00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2352 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3613 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.142 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.143 The recommended git tool is: git 00:00:00.143 using credential 00000000-0000-0000-0000-000000000002 00:00:00.145 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.187 Fetching changes from the remote Git repository 00:00:00.189 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.227 Using shallow fetch with depth 1 00:00:00.227 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.227 > git --version # timeout=10 00:00:00.256 > git --version # 'git version 2.39.2' 00:00:00.256 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.277 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.277 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.329 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.342 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.353 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:06.353 > git config core.sparsecheckout # timeout=10 00:00:06.363 > git read-tree -mu HEAD # timeout=10 00:00:06.378 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:06.395 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:06.396 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:06.485 [Pipeline] Start of Pipeline 00:00:06.500 [Pipeline] library 00:00:06.502 Loading library shm_lib@master 00:00:06.502 Library shm_lib@master is cached. Copying from home. 00:00:06.521 [Pipeline] node 00:00:06.543 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.545 [Pipeline] { 00:00:06.553 [Pipeline] catchError 00:00:06.554 [Pipeline] { 00:00:06.567 [Pipeline] wrap 00:00:06.575 [Pipeline] { 00:00:06.583 [Pipeline] stage 00:00:06.585 [Pipeline] { (Prologue) 00:00:06.603 [Pipeline] echo 00:00:06.605 Node: VM-host-SM9 00:00:06.612 [Pipeline] cleanWs 00:00:06.622 [WS-CLEANUP] Deleting project workspace... 00:00:06.622 [WS-CLEANUP] Deferred wipeout is used... 
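For reference, the shallow, pinned checkout that the Jenkins git plugin performs above can be reproduced by hand roughly as follows. This is a minimal sketch only: it omits the credential (GIT_ASKPASS) and http proxy settings shown in the log, and it assumes the pinned commit is still the tip of refs/heads/master (otherwise the shallow fetch would not contain it).

    git init jbp && cd jbp
    git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --depth=1 origin refs/heads/master
    # FETCH_HEAD commit recorded in this run:
    git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf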
00:00:06.628 [WS-CLEANUP] done 00:00:06.830 [Pipeline] setCustomBuildProperty 00:00:06.922 [Pipeline] httpRequest 00:00:07.362 [Pipeline] echo 00:00:07.364 Sorcerer 10.211.164.101 is alive 00:00:07.372 [Pipeline] retry 00:00:07.374 [Pipeline] { 00:00:07.385 [Pipeline] httpRequest 00:00:07.388 HttpMethod: GET 00:00:07.389 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.389 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.390 Response Code: HTTP/1.1 200 OK 00:00:07.390 Success: Status code 200 is in the accepted range: 200,404 00:00:07.391 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.670 [Pipeline] } 00:00:08.690 [Pipeline] // retry 00:00:08.698 [Pipeline] sh 00:00:08.982 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.997 [Pipeline] httpRequest 00:00:09.367 [Pipeline] echo 00:00:09.368 Sorcerer 10.211.164.101 is alive 00:00:09.376 [Pipeline] retry 00:00:09.378 [Pipeline] { 00:00:09.391 [Pipeline] httpRequest 00:00:09.396 HttpMethod: GET 00:00:09.397 URL: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:09.397 Sending request to url: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:09.398 Response Code: HTTP/1.1 200 OK 00:00:09.399 Success: Status code 200 is in the accepted range: 200,404 00:00:09.399 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:27.388 [Pipeline] } 00:00:27.406 [Pipeline] // retry 00:00:27.414 [Pipeline] sh 00:00:27.695 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:30.236 [Pipeline] sh 00:00:30.517 + git -C spdk log --oneline -n5 00:00:30.517 c13c99a5e test: Various fixes for Fedora40 00:00:30.517 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:30.517 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:30.517 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:30.517 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:30.542 [Pipeline] writeFile 00:00:30.576 [Pipeline] sh 00:00:30.860 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:30.871 [Pipeline] sh 00:00:31.152 + cat autorun-spdk.conf 00:00:31.152 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.152 SPDK_TEST_NVMF=1 00:00:31.152 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:31.152 SPDK_TEST_URING=1 00:00:31.152 SPDK_TEST_VFIOUSER=1 00:00:31.152 SPDK_TEST_USDT=1 00:00:31.152 SPDK_RUN_UBSAN=1 00:00:31.152 NET_TYPE=virt 00:00:31.152 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:31.160 RUN_NIGHTLY=1 00:00:31.162 [Pipeline] } 00:00:31.176 [Pipeline] // stage 00:00:31.191 [Pipeline] stage 00:00:31.193 [Pipeline] { (Run VM) 00:00:31.206 [Pipeline] sh 00:00:31.551 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:31.551 + echo 'Start stage prepare_nvme.sh' 00:00:31.551 Start stage prepare_nvme.sh 00:00:31.551 + [[ -n 2 ]] 00:00:31.551 + disk_prefix=ex2 00:00:31.551 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:31.551 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:31.551 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:31.551 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.551 ++ SPDK_TEST_NVMF=1 00:00:31.551 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:31.551 ++ SPDK_TEST_URING=1 00:00:31.551 ++ SPDK_TEST_VFIOUSER=1 00:00:31.551 ++ SPDK_TEST_USDT=1 00:00:31.551 ++ SPDK_RUN_UBSAN=1 00:00:31.551 ++ NET_TYPE=virt 00:00:31.551 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:31.551 ++ RUN_NIGHTLY=1 00:00:31.551 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:31.551 + nvme_files=() 00:00:31.551 + declare -A nvme_files 00:00:31.551 + backend_dir=/var/lib/libvirt/images/backends 00:00:31.551 + nvme_files['nvme.img']=5G 00:00:31.551 + nvme_files['nvme-cmb.img']=5G 00:00:31.551 + nvme_files['nvme-multi0.img']=4G 00:00:31.551 + nvme_files['nvme-multi1.img']=4G 00:00:31.551 + nvme_files['nvme-multi2.img']=4G 00:00:31.551 + nvme_files['nvme-openstack.img']=8G 00:00:31.551 + nvme_files['nvme-zns.img']=5G 00:00:31.551 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:31.551 + (( SPDK_TEST_FTL == 1 )) 00:00:31.551 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:31.551 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:31.551 + for nvme in "${!nvme_files[@]}" 00:00:31.552 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:00:31.552 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.552 + for nvme in "${!nvme_files[@]}" 00:00:31.552 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:00:31.552 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.810 + for nvme in "${!nvme_files[@]}" 00:00:31.810 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:00:31.810 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:31.810 + for nvme in "${!nvme_files[@]}" 00:00:31.810 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:00:31.810 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.810 + for nvme in "${!nvme_files[@]}" 00:00:31.810 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:00:32.069 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.069 + for nvme in "${!nvme_files[@]}" 00:00:32.069 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:00:32.329 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.329 + for nvme in "${!nvme_files[@]}" 00:00:32.329 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:00:32.589 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:32.589 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:00:32.589 + echo 'End stage prepare_nvme.sh' 00:00:32.589 End stage prepare_nvme.sh 00:00:32.601 [Pipeline] sh 00:00:32.882 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:32.882 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img 
-b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39 00:00:33.141 00:00:33.141 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:33.141 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:33.141 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:33.141 HELP=0 00:00:33.141 DRY_RUN=0 00:00:33.141 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:00:33.141 NVME_DISKS_TYPE=nvme,nvme, 00:00:33.141 NVME_AUTO_CREATE=0 00:00:33.141 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:00:33.141 NVME_CMB=,, 00:00:33.141 NVME_PMR=,, 00:00:33.141 NVME_ZNS=,, 00:00:33.141 NVME_MS=,, 00:00:33.141 NVME_FDP=,, 00:00:33.141 SPDK_VAGRANT_DISTRO=fedora39 00:00:33.141 SPDK_VAGRANT_VMCPU=10 00:00:33.141 SPDK_VAGRANT_VMRAM=12288 00:00:33.141 SPDK_VAGRANT_PROVIDER=libvirt 00:00:33.141 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:33.141 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:33.141 SPDK_OPENSTACK_NETWORK=0 00:00:33.141 VAGRANT_PACKAGE_BOX=0 00:00:33.141 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:33.141 FORCE_DISTRO=true 00:00:33.141 VAGRANT_BOX_VERSION= 00:00:33.141 EXTRA_VAGRANTFILES= 00:00:33.141 NIC_MODEL=e1000 00:00:33.141 00:00:33.141 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:00:33.141 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:36.430 Bringing machine 'default' up with 'libvirt' provider... 00:00:36.430 ==> default: Creating image (snapshot of base box volume). 00:00:36.690 ==> default: Creating domain with the following settings... 
00:00:36.690 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730905145_2c73db7a1b867b718335 00:00:36.690 ==> default: -- Domain type: kvm 00:00:36.690 ==> default: -- Cpus: 10 00:00:36.690 ==> default: -- Feature: acpi 00:00:36.690 ==> default: -- Feature: apic 00:00:36.690 ==> default: -- Feature: pae 00:00:36.690 ==> default: -- Memory: 12288M 00:00:36.690 ==> default: -- Memory Backing: hugepages: 00:00:36.690 ==> default: -- Management MAC: 00:00:36.690 ==> default: -- Loader: 00:00:36.690 ==> default: -- Nvram: 00:00:36.690 ==> default: -- Base box: spdk/fedora39 00:00:36.690 ==> default: -- Storage pool: default 00:00:36.690 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730905145_2c73db7a1b867b718335.img (20G) 00:00:36.690 ==> default: -- Volume Cache: default 00:00:36.690 ==> default: -- Kernel: 00:00:36.690 ==> default: -- Initrd: 00:00:36.690 ==> default: -- Graphics Type: vnc 00:00:36.690 ==> default: -- Graphics Port: -1 00:00:36.690 ==> default: -- Graphics IP: 127.0.0.1 00:00:36.690 ==> default: -- Graphics Password: Not defined 00:00:36.690 ==> default: -- Video Type: cirrus 00:00:36.690 ==> default: -- Video VRAM: 9216 00:00:36.690 ==> default: -- Sound Type: 00:00:36.690 ==> default: -- Keymap: en-us 00:00:36.690 ==> default: -- TPM Path: 00:00:36.690 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:36.690 ==> default: -- Command line args: 00:00:36.690 ==> default: -> value=-device, 00:00:36.690 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:00:36.690 ==> default: -> value=-drive, 00:00:36.690 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:00:36.690 ==> default: -> value=-device, 00:00:36.690 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.690 ==> default: -> value=-device, 00:00:36.690 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:00:36.690 ==> default: -> value=-drive, 00:00:36.690 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:36.690 ==> default: -> value=-device, 00:00:36.690 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.690 ==> default: -> value=-drive, 00:00:36.690 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:36.690 ==> default: -> value=-device, 00:00:36.690 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.690 ==> default: -> value=-drive, 00:00:36.690 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:36.690 ==> default: -> value=-device, 00:00:36.690 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.690 ==> default: Creating shared folders metadata... 00:00:36.690 ==> default: Starting domain. 00:00:38.070 ==> default: Waiting for domain to get an IP address... 00:00:56.162 ==> default: Waiting for SSH to become available... 00:00:56.162 ==> default: Configuring and enabling network interfaces... 
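The -drive/-device pairs in the domain settings above are what give the guest its emulated NVMe hardware: one single-namespace controller (serial 12340) backed by ex2-nvme.img, and one three-namespace controller (serial 12341) backed by the ex2-nvme-multi*.img files, which is why the guest later reports nvme0n1 plus nvme1n1/n2/n3. A minimal standalone sketch of the same wiring is shown below, using the emulator path from this job's configuration; the -machine, -smp and -m flags are assumptions taken from the domain summary, and no boot disk is included, so this only illustrates the NVMe device layout rather than a bootable VM.

    QEMU=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
    B=/var/lib/libvirt/images/backends
    "$QEMU" -machine q35,accel=kvm -smp 10 -m 12288 \
      -drive format=raw,file=$B/ex2-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme,id=nvme-0,serial=12340 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=$B/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0 \
      -drive format=raw,file=$B/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1 \
      -drive format=raw,file=$B/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2 \
      -device nvme,id=nvme-1,serial=12341 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1 \
      -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2 \
      -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3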
00:00:58.698 default: SSH address: 192.168.121.222:22 00:00:58.698 default: SSH username: vagrant 00:00:58.698 default: SSH auth method: private key 00:01:01.231 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:09.350 ==> default: Mounting SSHFS shared folder... 00:01:10.288 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:10.288 ==> default: Checking Mount.. 00:01:11.665 ==> default: Folder Successfully Mounted! 00:01:11.665 ==> default: Running provisioner: file... 00:01:12.601 default: ~/.gitconfig => .gitconfig 00:01:12.861 00:01:12.861 SUCCESS! 00:01:12.861 00:01:12.861 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:12.861 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:12.861 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:12.861 00:01:12.870 [Pipeline] } 00:01:12.884 [Pipeline] // stage 00:01:12.893 [Pipeline] dir 00:01:12.894 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:12.895 [Pipeline] { 00:01:12.909 [Pipeline] catchError 00:01:12.911 [Pipeline] { 00:01:12.923 [Pipeline] sh 00:01:13.203 + vagrant ssh-config --host vagrant 00:01:13.204 + sed -ne /^Host/,$p 00:01:13.204 + tee ssh_conf 00:01:16.491 Host vagrant 00:01:16.491 HostName 192.168.121.222 00:01:16.491 User vagrant 00:01:16.491 Port 22 00:01:16.491 UserKnownHostsFile /dev/null 00:01:16.491 StrictHostKeyChecking no 00:01:16.491 PasswordAuthentication no 00:01:16.491 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:16.491 IdentitiesOnly yes 00:01:16.491 LogLevel FATAL 00:01:16.491 ForwardAgent yes 00:01:16.491 ForwardX11 yes 00:01:16.491 00:01:16.505 [Pipeline] withEnv 00:01:16.508 [Pipeline] { 00:01:16.520 [Pipeline] sh 00:01:16.800 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:16.800 source /etc/os-release 00:01:16.800 [[ -e /image.version ]] && img=$(< /image.version) 00:01:16.800 # Minimal, systemd-like check. 00:01:16.800 if [[ -e /.dockerenv ]]; then 00:01:16.800 # Clear garbage from the node's name: 00:01:16.800 # agt-er_autotest_547-896 -> autotest_547-896 00:01:16.800 # $HOSTNAME is the actual container id 00:01:16.800 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:16.800 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:16.800 # We can assume this is a mount from a host where container is running, 00:01:16.800 # so fetch its hostname to easily identify the target swarm worker. 
00:01:16.800 container="$(< /etc/hostname) ($agent)" 00:01:16.800 else 00:01:16.800 # Fallback 00:01:16.800 container=$agent 00:01:16.800 fi 00:01:16.800 fi 00:01:16.800 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:16.800 00:01:17.071 [Pipeline] } 00:01:17.086 [Pipeline] // withEnv 00:01:17.095 [Pipeline] setCustomBuildProperty 00:01:17.111 [Pipeline] stage 00:01:17.113 [Pipeline] { (Tests) 00:01:17.131 [Pipeline] sh 00:01:17.445 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:17.518 [Pipeline] sh 00:01:17.799 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:18.073 [Pipeline] timeout 00:01:18.074 Timeout set to expire in 1 hr 0 min 00:01:18.075 [Pipeline] { 00:01:18.089 [Pipeline] sh 00:01:18.369 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:18.937 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:01:18.949 [Pipeline] sh 00:01:19.228 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:19.501 [Pipeline] sh 00:01:19.784 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:20.058 [Pipeline] sh 00:01:20.340 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:20.598 ++ readlink -f spdk_repo 00:01:20.598 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:20.598 + [[ -n /home/vagrant/spdk_repo ]] 00:01:20.598 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:20.598 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:20.598 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:20.598 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:20.598 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:20.598 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:20.598 + cd /home/vagrant/spdk_repo 00:01:20.598 + source /etc/os-release 00:01:20.598 ++ NAME='Fedora Linux' 00:01:20.598 ++ VERSION='39 (Cloud Edition)' 00:01:20.598 ++ ID=fedora 00:01:20.598 ++ VERSION_ID=39 00:01:20.598 ++ VERSION_CODENAME= 00:01:20.598 ++ PLATFORM_ID=platform:f39 00:01:20.598 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:20.598 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:20.598 ++ LOGO=fedora-logo-icon 00:01:20.598 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:20.598 ++ HOME_URL=https://fedoraproject.org/ 00:01:20.598 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:20.598 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:20.598 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:20.598 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:20.598 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:20.598 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:20.598 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:20.598 ++ SUPPORT_END=2024-11-12 00:01:20.598 ++ VARIANT='Cloud Edition' 00:01:20.598 ++ VARIANT_ID=cloud 00:01:20.598 + uname -a 00:01:20.598 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:20.598 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:20.598 Hugepages 00:01:20.598 node hugesize free / total 00:01:20.598 node0 1048576kB 0 / 0 00:01:20.598 node0 2048kB 0 / 0 00:01:20.598 00:01:20.598 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:20.598 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:20.598 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:20.598 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:20.598 + rm -f /tmp/spdk-ld-path 00:01:20.598 + source autorun-spdk.conf 00:01:20.598 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.598 ++ SPDK_TEST_NVMF=1 00:01:20.598 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.598 ++ SPDK_TEST_URING=1 00:01:20.598 ++ SPDK_TEST_VFIOUSER=1 00:01:20.598 ++ SPDK_TEST_USDT=1 00:01:20.598 ++ SPDK_RUN_UBSAN=1 00:01:20.598 ++ NET_TYPE=virt 00:01:20.598 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.598 ++ RUN_NIGHTLY=1 00:01:20.598 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:20.598 + [[ -n '' ]] 00:01:20.598 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:20.857 + for M in /var/spdk/build-*-manifest.txt 00:01:20.857 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:20.857 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:20.857 + for M in /var/spdk/build-*-manifest.txt 00:01:20.857 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:20.857 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:20.857 + for M in /var/spdk/build-*-manifest.txt 00:01:20.857 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:20.857 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:20.857 ++ uname 00:01:20.857 + [[ Linux == \L\i\n\u\x ]] 00:01:20.857 + sudo dmesg -T 00:01:20.857 + sudo dmesg --clear 00:01:20.857 + dmesg_pid=5231 00:01:20.857 + [[ Fedora Linux == FreeBSD ]] 00:01:20.857 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:20.857 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:20.857 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:20.857 + [[ -x 
/usr/src/fio-static/fio ]] 00:01:20.857 + sudo dmesg -Tw 00:01:20.857 + export FIO_BIN=/usr/src/fio-static/fio 00:01:20.857 + FIO_BIN=/usr/src/fio-static/fio 00:01:20.857 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:20.857 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:20.857 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:20.857 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:20.857 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:20.857 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:20.857 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:20.857 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:20.857 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:20.857 Test configuration: 00:01:20.857 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.857 SPDK_TEST_NVMF=1 00:01:20.857 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.857 SPDK_TEST_URING=1 00:01:20.857 SPDK_TEST_VFIOUSER=1 00:01:20.857 SPDK_TEST_USDT=1 00:01:20.857 SPDK_RUN_UBSAN=1 00:01:20.857 NET_TYPE=virt 00:01:20.857 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.857 RUN_NIGHTLY=1 14:59:50 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:20.857 14:59:50 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:20.857 14:59:50 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:20.857 14:59:50 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:20.857 14:59:50 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:20.857 14:59:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.857 14:59:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.857 14:59:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.857 14:59:50 -- paths/export.sh@5 -- $ export PATH 00:01:20.857 14:59:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.857 14:59:50 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:20.857 14:59:50 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:20.857 14:59:50 -- 
common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1730905190.XXXXXX 00:01:20.857 14:59:50 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1730905190.lhpD7R 00:01:20.857 14:59:50 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:20.857 14:59:50 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:01:20.857 14:59:50 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:20.857 14:59:50 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:20.857 14:59:50 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:20.857 14:59:50 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:20.857 14:59:50 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:20.857 14:59:50 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.857 14:59:50 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:01:20.857 14:59:50 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:20.857 14:59:50 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:20.857 14:59:50 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:20.857 14:59:50 -- spdk/autobuild.sh@16 -- $ date -u 00:01:20.857 Wed Nov 6 02:59:50 PM UTC 2024 00:01:20.857 14:59:50 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:20.857 LTS-67-gc13c99a5e 00:01:20.857 14:59:50 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:20.857 14:59:50 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:20.857 14:59:50 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:20.857 14:59:50 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:20.857 14:59:50 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:20.857 14:59:50 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.116 ************************************ 00:01:21.116 START TEST ubsan 00:01:21.116 ************************************ 00:01:21.116 using ubsan 00:01:21.116 14:59:50 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:21.116 00:01:21.116 real 0m0.000s 00:01:21.117 user 0m0.000s 00:01:21.117 sys 0m0.000s 00:01:21.117 14:59:50 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:21.117 ************************************ 00:01:21.117 END TEST ubsan 00:01:21.117 14:59:50 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.117 ************************************ 00:01:21.117 14:59:50 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:21.117 14:59:50 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:21.117 14:59:50 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:21.117 14:59:50 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:21.117 14:59:50 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:21.117 14:59:50 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:21.117 14:59:50 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:21.117 14:59:50 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:21.117 14:59:50 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator 
--disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:01:21.375 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:21.375 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:21.635 Using 'verbs' RDMA provider 00:01:34.458 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:01:49.362 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:49.362 Creating mk/config.mk...done. 00:01:49.362 Creating mk/cc.flags.mk...done. 00:01:49.362 Type 'make' to build. 00:01:49.362 15:00:17 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:49.362 15:00:17 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:49.362 15:00:17 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:49.362 15:00:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:49.362 ************************************ 00:01:49.362 START TEST make 00:01:49.362 ************************************ 00:01:49.362 15:00:17 -- common/autotest_common.sh@1114 -- $ make -j10 00:01:49.362 make[1]: Nothing to be done for 'all'. 00:01:49.621 The Meson build system 00:01:49.621 Version: 1.5.0 00:01:49.621 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:01:49.621 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:01:49.621 Build type: native build 00:01:49.621 Project name: libvfio-user 00:01:49.621 Project version: 0.0.1 00:01:49.621 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:49.621 C linker for the host machine: cc ld.bfd 2.40-14 00:01:49.621 Host machine cpu family: x86_64 00:01:49.621 Host machine cpu: x86_64 00:01:49.621 Run-time dependency threads found: YES 00:01:49.621 Library dl found: YES 00:01:49.621 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:49.621 Run-time dependency json-c found: YES 0.17 00:01:49.621 Run-time dependency cmocka found: YES 1.1.7 00:01:49.621 Program pytest-3 found: NO 00:01:49.621 Program flake8 found: NO 00:01:49.621 Program misspell-fixer found: NO 00:01:49.621 Program restructuredtext-lint found: NO 00:01:49.621 Program valgrind found: YES (/usr/bin/valgrind) 00:01:49.621 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:49.621 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:49.621 Compiler for C supports arguments -Wwrite-strings: YES 00:01:49.621 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:49.621 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:01:49.621 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:01:49.621 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:49.621 Build targets in project: 8 00:01:49.621 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:49.621 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:49.621 00:01:49.621 libvfio-user 0.0.1 00:01:49.621 00:01:49.621 User defined options 00:01:49.621 buildtype : debug 00:01:49.621 default_library: shared 00:01:49.621 libdir : /usr/local/lib 00:01:49.621 00:01:49.621 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:49.880 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:01:50.139 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:50.139 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:50.139 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:50.139 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:50.139 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:50.139 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:50.139 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:50.139 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:50.139 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:50.139 [10/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:50.139 [11/37] Compiling C object samples/null.p/null.c.o 00:01:50.139 [12/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:50.139 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:50.139 [14/37] Compiling C object samples/client.p/client.c.o 00:01:50.397 [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:50.397 [16/37] Compiling C object samples/server.p/server.c.o 00:01:50.397 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:50.397 [18/37] Linking target samples/client 00:01:50.397 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:50.397 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:50.397 [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:50.397 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:50.397 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:50.397 [24/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:50.397 [25/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:50.397 [26/37] Linking target lib/libvfio-user.so.0.0.1 00:01:50.397 [27/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:50.397 [28/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:50.397 [29/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:50.655 [30/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:50.655 [31/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:50.655 [32/37] Linking target samples/server 00:01:50.655 [33/37] Linking target samples/lspci 00:01:50.655 [34/37] Linking target samples/shadow_ioeventfd_server 00:01:50.655 [35/37] Linking target samples/null 00:01:50.655 [36/37] Linking target samples/gpio-pci-idio-16 00:01:50.655 [37/37] Linking target test/unit_tests 00:01:50.655 INFO: autodetecting backend as ninja 00:01:50.655 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:01:50.655 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:01:51.281 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:01:51.281 ninja: no work to do. 00:01:59.399 The Meson build system 00:01:59.399 Version: 1.5.0 00:01:59.399 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:59.399 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:59.399 Build type: native build 00:01:59.399 Program cat found: YES (/usr/bin/cat) 00:01:59.399 Project name: DPDK 00:01:59.399 Project version: 23.11.0 00:01:59.399 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:59.399 C linker for the host machine: cc ld.bfd 2.40-14 00:01:59.399 Host machine cpu family: x86_64 00:01:59.399 Host machine cpu: x86_64 00:01:59.399 Message: ## Building in Developer Mode ## 00:01:59.399 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:59.399 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:59.399 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:59.399 Program python3 found: YES (/usr/bin/python3) 00:01:59.399 Program cat found: YES (/usr/bin/cat) 00:01:59.399 Compiler for C supports arguments -march=native: YES 00:01:59.399 Checking for size of "void *" : 8 00:01:59.399 Checking for size of "void *" : 8 (cached) 00:01:59.399 Library m found: YES 00:01:59.399 Library numa found: YES 00:01:59.399 Has header "numaif.h" : YES 00:01:59.399 Library fdt found: NO 00:01:59.399 Library execinfo found: NO 00:01:59.399 Has header "execinfo.h" : YES 00:01:59.399 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:59.399 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:59.399 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:59.399 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:59.399 Run-time dependency openssl found: YES 3.1.1 00:01:59.399 Run-time dependency libpcap found: YES 1.10.4 00:01:59.399 Has header "pcap.h" with dependency libpcap: YES 00:01:59.399 Compiler for C supports arguments -Wcast-qual: YES 00:01:59.399 Compiler for C supports arguments -Wdeprecated: YES 00:01:59.399 Compiler for C supports arguments -Wformat: YES 00:01:59.399 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:59.399 Compiler for C supports arguments -Wformat-security: NO 00:01:59.399 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:59.399 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:59.399 Compiler for C supports arguments -Wnested-externs: YES 00:01:59.399 Compiler for C supports arguments -Wold-style-definition: YES 00:01:59.399 Compiler for C supports arguments -Wpointer-arith: YES 00:01:59.399 Compiler for C supports arguments -Wsign-compare: YES 00:01:59.399 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:59.399 Compiler for C supports arguments -Wundef: YES 00:01:59.399 Compiler for C supports arguments -Wwrite-strings: YES 00:01:59.400 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:59.400 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:59.400 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:59.400 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:59.400 Program objdump found: YES (/usr/bin/objdump) 00:01:59.400 
Compiler for C supports arguments -mavx512f: YES 00:01:59.400 Checking if "AVX512 checking" compiles: YES 00:01:59.400 Fetching value of define "__SSE4_2__" : 1 00:01:59.400 Fetching value of define "__AES__" : 1 00:01:59.400 Fetching value of define "__AVX__" : 1 00:01:59.400 Fetching value of define "__AVX2__" : 1 00:01:59.400 Fetching value of define "__AVX512BW__" : (undefined) 00:01:59.400 Fetching value of define "__AVX512CD__" : (undefined) 00:01:59.400 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:59.400 Fetching value of define "__AVX512F__" : (undefined) 00:01:59.400 Fetching value of define "__AVX512VL__" : (undefined) 00:01:59.400 Fetching value of define "__PCLMUL__" : 1 00:01:59.400 Fetching value of define "__RDRND__" : 1 00:01:59.400 Fetching value of define "__RDSEED__" : 1 00:01:59.400 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:59.400 Fetching value of define "__znver1__" : (undefined) 00:01:59.400 Fetching value of define "__znver2__" : (undefined) 00:01:59.400 Fetching value of define "__znver3__" : (undefined) 00:01:59.400 Fetching value of define "__znver4__" : (undefined) 00:01:59.400 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:59.400 Message: lib/log: Defining dependency "log" 00:01:59.400 Message: lib/kvargs: Defining dependency "kvargs" 00:01:59.400 Message: lib/telemetry: Defining dependency "telemetry" 00:01:59.400 Checking for function "getentropy" : NO 00:01:59.400 Message: lib/eal: Defining dependency "eal" 00:01:59.400 Message: lib/ring: Defining dependency "ring" 00:01:59.400 Message: lib/rcu: Defining dependency "rcu" 00:01:59.400 Message: lib/mempool: Defining dependency "mempool" 00:01:59.400 Message: lib/mbuf: Defining dependency "mbuf" 00:01:59.400 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:59.400 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.400 Compiler for C supports arguments -mpclmul: YES 00:01:59.400 Compiler for C supports arguments -maes: YES 00:01:59.400 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:59.400 Compiler for C supports arguments -mavx512bw: YES 00:01:59.400 Compiler for C supports arguments -mavx512dq: YES 00:01:59.400 Compiler for C supports arguments -mavx512vl: YES 00:01:59.400 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:59.400 Compiler for C supports arguments -mavx2: YES 00:01:59.400 Compiler for C supports arguments -mavx: YES 00:01:59.400 Message: lib/net: Defining dependency "net" 00:01:59.400 Message: lib/meter: Defining dependency "meter" 00:01:59.400 Message: lib/ethdev: Defining dependency "ethdev" 00:01:59.400 Message: lib/pci: Defining dependency "pci" 00:01:59.400 Message: lib/cmdline: Defining dependency "cmdline" 00:01:59.400 Message: lib/hash: Defining dependency "hash" 00:01:59.400 Message: lib/timer: Defining dependency "timer" 00:01:59.400 Message: lib/compressdev: Defining dependency "compressdev" 00:01:59.400 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:59.400 Message: lib/dmadev: Defining dependency "dmadev" 00:01:59.400 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:59.400 Message: lib/power: Defining dependency "power" 00:01:59.400 Message: lib/reorder: Defining dependency "reorder" 00:01:59.400 Message: lib/security: Defining dependency "security" 00:01:59.400 Has header "linux/userfaultfd.h" : YES 00:01:59.400 Has header "linux/vduse.h" : YES 00:01:59.400 Message: lib/vhost: Defining dependency "vhost" 00:01:59.400 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:01:59.400 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:59.400 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:59.400 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:59.400 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:59.400 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:59.400 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:59.400 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:59.400 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:59.400 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:59.400 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:59.400 Configuring doxy-api-html.conf using configuration 00:01:59.400 Configuring doxy-api-man.conf using configuration 00:01:59.400 Program mandb found: YES (/usr/bin/mandb) 00:01:59.400 Program sphinx-build found: NO 00:01:59.400 Configuring rte_build_config.h using configuration 00:01:59.400 Message: 00:01:59.400 ================= 00:01:59.400 Applications Enabled 00:01:59.400 ================= 00:01:59.400 00:01:59.400 apps: 00:01:59.400 00:01:59.400 00:01:59.400 Message: 00:01:59.400 ================= 00:01:59.400 Libraries Enabled 00:01:59.400 ================= 00:01:59.400 00:01:59.400 libs: 00:01:59.400 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:59.400 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:59.400 cryptodev, dmadev, power, reorder, security, vhost, 00:01:59.400 00:01:59.400 Message: 00:01:59.400 =============== 00:01:59.400 Drivers Enabled 00:01:59.400 =============== 00:01:59.400 00:01:59.400 common: 00:01:59.400 00:01:59.400 bus: 00:01:59.400 pci, vdev, 00:01:59.400 mempool: 00:01:59.400 ring, 00:01:59.400 dma: 00:01:59.400 00:01:59.400 net: 00:01:59.400 00:01:59.400 crypto: 00:01:59.400 00:01:59.400 compress: 00:01:59.400 00:01:59.400 vdpa: 00:01:59.400 00:01:59.400 00:01:59.400 Message: 00:01:59.400 ================= 00:01:59.400 Content Skipped 00:01:59.400 ================= 00:01:59.400 00:01:59.400 apps: 00:01:59.400 dumpcap: explicitly disabled via build config 00:01:59.400 graph: explicitly disabled via build config 00:01:59.400 pdump: explicitly disabled via build config 00:01:59.400 proc-info: explicitly disabled via build config 00:01:59.400 test-acl: explicitly disabled via build config 00:01:59.400 test-bbdev: explicitly disabled via build config 00:01:59.400 test-cmdline: explicitly disabled via build config 00:01:59.400 test-compress-perf: explicitly disabled via build config 00:01:59.400 test-crypto-perf: explicitly disabled via build config 00:01:59.400 test-dma-perf: explicitly disabled via build config 00:01:59.400 test-eventdev: explicitly disabled via build config 00:01:59.400 test-fib: explicitly disabled via build config 00:01:59.400 test-flow-perf: explicitly disabled via build config 00:01:59.400 test-gpudev: explicitly disabled via build config 00:01:59.400 test-mldev: explicitly disabled via build config 00:01:59.400 test-pipeline: explicitly disabled via build config 00:01:59.400 test-pmd: explicitly disabled via build config 00:01:59.400 test-regex: explicitly disabled via build config 00:01:59.400 test-sad: explicitly disabled via build config 00:01:59.400 test-security-perf: explicitly disabled via build config 00:01:59.400 00:01:59.400 libs: 00:01:59.400 metrics: explicitly 
disabled via build config 00:01:59.400 acl: explicitly disabled via build config 00:01:59.400 bbdev: explicitly disabled via build config 00:01:59.400 bitratestats: explicitly disabled via build config 00:01:59.400 bpf: explicitly disabled via build config 00:01:59.400 cfgfile: explicitly disabled via build config 00:01:59.400 distributor: explicitly disabled via build config 00:01:59.400 efd: explicitly disabled via build config 00:01:59.400 eventdev: explicitly disabled via build config 00:01:59.400 dispatcher: explicitly disabled via build config 00:01:59.400 gpudev: explicitly disabled via build config 00:01:59.400 gro: explicitly disabled via build config 00:01:59.400 gso: explicitly disabled via build config 00:01:59.400 ip_frag: explicitly disabled via build config 00:01:59.400 jobstats: explicitly disabled via build config 00:01:59.400 latencystats: explicitly disabled via build config 00:01:59.400 lpm: explicitly disabled via build config 00:01:59.400 member: explicitly disabled via build config 00:01:59.400 pcapng: explicitly disabled via build config 00:01:59.400 rawdev: explicitly disabled via build config 00:01:59.400 regexdev: explicitly disabled via build config 00:01:59.400 mldev: explicitly disabled via build config 00:01:59.400 rib: explicitly disabled via build config 00:01:59.400 sched: explicitly disabled via build config 00:01:59.400 stack: explicitly disabled via build config 00:01:59.400 ipsec: explicitly disabled via build config 00:01:59.400 pdcp: explicitly disabled via build config 00:01:59.400 fib: explicitly disabled via build config 00:01:59.400 port: explicitly disabled via build config 00:01:59.400 pdump: explicitly disabled via build config 00:01:59.400 table: explicitly disabled via build config 00:01:59.400 pipeline: explicitly disabled via build config 00:01:59.400 graph: explicitly disabled via build config 00:01:59.400 node: explicitly disabled via build config 00:01:59.400 00:01:59.400 drivers: 00:01:59.400 common/cpt: not in enabled drivers build config 00:01:59.400 common/dpaax: not in enabled drivers build config 00:01:59.400 common/iavf: not in enabled drivers build config 00:01:59.400 common/idpf: not in enabled drivers build config 00:01:59.400 common/mvep: not in enabled drivers build config 00:01:59.400 common/octeontx: not in enabled drivers build config 00:01:59.400 bus/auxiliary: not in enabled drivers build config 00:01:59.400 bus/cdx: not in enabled drivers build config 00:01:59.400 bus/dpaa: not in enabled drivers build config 00:01:59.400 bus/fslmc: not in enabled drivers build config 00:01:59.400 bus/ifpga: not in enabled drivers build config 00:01:59.400 bus/platform: not in enabled drivers build config 00:01:59.400 bus/vmbus: not in enabled drivers build config 00:01:59.400 common/cnxk: not in enabled drivers build config 00:01:59.400 common/mlx5: not in enabled drivers build config 00:01:59.400 common/nfp: not in enabled drivers build config 00:01:59.400 common/qat: not in enabled drivers build config 00:01:59.400 common/sfc_efx: not in enabled drivers build config 00:01:59.400 mempool/bucket: not in enabled drivers build config 00:01:59.400 mempool/cnxk: not in enabled drivers build config 00:01:59.401 mempool/dpaa: not in enabled drivers build config 00:01:59.401 mempool/dpaa2: not in enabled drivers build config 00:01:59.401 mempool/octeontx: not in enabled drivers build config 00:01:59.401 mempool/stack: not in enabled drivers build config 00:01:59.401 dma/cnxk: not in enabled drivers build config 00:01:59.401 dma/dpaa: not in 
enabled drivers build config 00:01:59.401 dma/dpaa2: not in enabled drivers build config 00:01:59.401 dma/hisilicon: not in enabled drivers build config 00:01:59.401 dma/idxd: not in enabled drivers build config 00:01:59.401 dma/ioat: not in enabled drivers build config 00:01:59.401 dma/skeleton: not in enabled drivers build config 00:01:59.401 net/af_packet: not in enabled drivers build config 00:01:59.401 net/af_xdp: not in enabled drivers build config 00:01:59.401 net/ark: not in enabled drivers build config 00:01:59.401 net/atlantic: not in enabled drivers build config 00:01:59.401 net/avp: not in enabled drivers build config 00:01:59.401 net/axgbe: not in enabled drivers build config 00:01:59.401 net/bnx2x: not in enabled drivers build config 00:01:59.401 net/bnxt: not in enabled drivers build config 00:01:59.401 net/bonding: not in enabled drivers build config 00:01:59.401 net/cnxk: not in enabled drivers build config 00:01:59.401 net/cpfl: not in enabled drivers build config 00:01:59.401 net/cxgbe: not in enabled drivers build config 00:01:59.401 net/dpaa: not in enabled drivers build config 00:01:59.401 net/dpaa2: not in enabled drivers build config 00:01:59.401 net/e1000: not in enabled drivers build config 00:01:59.401 net/ena: not in enabled drivers build config 00:01:59.401 net/enetc: not in enabled drivers build config 00:01:59.401 net/enetfec: not in enabled drivers build config 00:01:59.401 net/enic: not in enabled drivers build config 00:01:59.401 net/failsafe: not in enabled drivers build config 00:01:59.401 net/fm10k: not in enabled drivers build config 00:01:59.401 net/gve: not in enabled drivers build config 00:01:59.401 net/hinic: not in enabled drivers build config 00:01:59.401 net/hns3: not in enabled drivers build config 00:01:59.401 net/i40e: not in enabled drivers build config 00:01:59.401 net/iavf: not in enabled drivers build config 00:01:59.401 net/ice: not in enabled drivers build config 00:01:59.401 net/idpf: not in enabled drivers build config 00:01:59.401 net/igc: not in enabled drivers build config 00:01:59.401 net/ionic: not in enabled drivers build config 00:01:59.401 net/ipn3ke: not in enabled drivers build config 00:01:59.401 net/ixgbe: not in enabled drivers build config 00:01:59.401 net/mana: not in enabled drivers build config 00:01:59.401 net/memif: not in enabled drivers build config 00:01:59.401 net/mlx4: not in enabled drivers build config 00:01:59.401 net/mlx5: not in enabled drivers build config 00:01:59.401 net/mvneta: not in enabled drivers build config 00:01:59.401 net/mvpp2: not in enabled drivers build config 00:01:59.401 net/netvsc: not in enabled drivers build config 00:01:59.401 net/nfb: not in enabled drivers build config 00:01:59.401 net/nfp: not in enabled drivers build config 00:01:59.401 net/ngbe: not in enabled drivers build config 00:01:59.401 net/null: not in enabled drivers build config 00:01:59.401 net/octeontx: not in enabled drivers build config 00:01:59.401 net/octeon_ep: not in enabled drivers build config 00:01:59.401 net/pcap: not in enabled drivers build config 00:01:59.401 net/pfe: not in enabled drivers build config 00:01:59.401 net/qede: not in enabled drivers build config 00:01:59.401 net/ring: not in enabled drivers build config 00:01:59.401 net/sfc: not in enabled drivers build config 00:01:59.401 net/softnic: not in enabled drivers build config 00:01:59.401 net/tap: not in enabled drivers build config 00:01:59.401 net/thunderx: not in enabled drivers build config 00:01:59.401 net/txgbe: not in enabled drivers 
build config 00:01:59.401 net/vdev_netvsc: not in enabled drivers build config 00:01:59.401 net/vhost: not in enabled drivers build config 00:01:59.401 net/virtio: not in enabled drivers build config 00:01:59.401 net/vmxnet3: not in enabled drivers build config 00:01:59.401 raw/*: missing internal dependency, "rawdev" 00:01:59.401 crypto/armv8: not in enabled drivers build config 00:01:59.401 crypto/bcmfs: not in enabled drivers build config 00:01:59.401 crypto/caam_jr: not in enabled drivers build config 00:01:59.401 crypto/ccp: not in enabled drivers build config 00:01:59.401 crypto/cnxk: not in enabled drivers build config 00:01:59.401 crypto/dpaa_sec: not in enabled drivers build config 00:01:59.401 crypto/dpaa2_sec: not in enabled drivers build config 00:01:59.401 crypto/ipsec_mb: not in enabled drivers build config 00:01:59.401 crypto/mlx5: not in enabled drivers build config 00:01:59.401 crypto/mvsam: not in enabled drivers build config 00:01:59.401 crypto/nitrox: not in enabled drivers build config 00:01:59.401 crypto/null: not in enabled drivers build config 00:01:59.401 crypto/octeontx: not in enabled drivers build config 00:01:59.401 crypto/openssl: not in enabled drivers build config 00:01:59.401 crypto/scheduler: not in enabled drivers build config 00:01:59.401 crypto/uadk: not in enabled drivers build config 00:01:59.401 crypto/virtio: not in enabled drivers build config 00:01:59.401 compress/isal: not in enabled drivers build config 00:01:59.401 compress/mlx5: not in enabled drivers build config 00:01:59.401 compress/octeontx: not in enabled drivers build config 00:01:59.401 compress/zlib: not in enabled drivers build config 00:01:59.401 regex/*: missing internal dependency, "regexdev" 00:01:59.401 ml/*: missing internal dependency, "mldev" 00:01:59.401 vdpa/ifc: not in enabled drivers build config 00:01:59.401 vdpa/mlx5: not in enabled drivers build config 00:01:59.401 vdpa/nfp: not in enabled drivers build config 00:01:59.401 vdpa/sfc: not in enabled drivers build config 00:01:59.401 event/*: missing internal dependency, "eventdev" 00:01:59.401 baseband/*: missing internal dependency, "bbdev" 00:01:59.401 gpu/*: missing internal dependency, "gpudev" 00:01:59.401 00:01:59.401 00:01:59.401 Build targets in project: 85 00:01:59.401 00:01:59.401 DPDK 23.11.0 00:01:59.401 00:01:59.401 User defined options 00:01:59.401 buildtype : debug 00:01:59.401 default_library : shared 00:01:59.401 libdir : lib 00:01:59.401 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:59.401 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:59.401 c_link_args : 00:01:59.401 cpu_instruction_set: native 00:01:59.401 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:59.401 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:59.401 enable_docs : false 00:01:59.401 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:59.401 enable_kmods : false 00:01:59.401 tests : false 00:01:59.401 00:01:59.401 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:59.401 ninja: Entering directory 
`/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:59.401 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:59.401 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:59.401 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:59.401 [4/265] Linking static target lib/librte_kvargs.a 00:01:59.401 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:59.401 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:59.401 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:59.401 [8/265] Linking static target lib/librte_log.a 00:01:59.401 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:59.401 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:59.661 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.919 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:59.920 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:59.920 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:59.920 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:00.179 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.179 [17/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:00.179 [18/265] Linking static target lib/librte_telemetry.a 00:02:00.179 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:00.179 [20/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.179 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:00.179 [22/265] Linking target lib/librte_log.so.24.0 00:02:00.440 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:00.440 [24/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:00.440 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:00.699 [26/265] Linking target lib/librte_kvargs.so.24.0 00:02:00.699 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:00.957 [28/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:00.957 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:00.957 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:00.957 [31/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.957 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:00.957 [33/265] Linking target lib/librte_telemetry.so.24.0 00:02:00.957 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:01.215 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:01.215 [36/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:01.215 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:01.474 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:01.474 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:01.474 [40/265] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:01.474 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:01.474 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:01.474 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:01.732 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:01.732 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:01.732 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:01.991 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:01.991 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:02.249 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:02.249 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:02.249 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:02.508 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:02.508 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:02.508 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:02.508 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:02.508 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:02.767 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:02.767 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:02.767 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:03.026 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:03.026 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:03.026 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:03.026 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:03.026 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:03.284 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:03.284 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:03.284 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:03.543 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:03.802 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:03.802 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:03.802 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:04.060 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:04.060 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:04.060 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:04.060 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:04.060 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:04.060 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:04.060 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:04.319 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:04.319 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:04.319 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:04.577 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:04.836 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:04.836 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:04.836 [85/265] Linking static target lib/librte_ring.a 00:02:05.094 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:05.094 [87/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:05.094 [88/265] Linking static target lib/librte_eal.a 00:02:05.094 [89/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:05.094 [90/265] Linking static target lib/librte_rcu.a 00:02:05.094 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:05.353 [92/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:05.353 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:05.353 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:05.353 [95/265] Linking static target lib/librte_mempool.a 00:02:05.353 [96/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.612 [97/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.612 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:05.871 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:05.871 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:05.871 [101/265] Linking static target lib/librte_mbuf.a 00:02:06.129 [102/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:06.129 [103/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:06.129 [104/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:06.388 [105/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:06.388 [106/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:06.388 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:06.646 [108/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:06.646 [109/265] Linking static target lib/librte_net.a 00:02:06.646 [110/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:06.646 [111/265] Linking static target lib/librte_meter.a 00:02:06.646 [112/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.905 [113/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.163 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:07.163 [115/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.163 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.163 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:07.422 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:07.422 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:08.357 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:08.357 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 
00:02:08.357 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:08.357 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:08.357 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:08.615 [125/265] Linking static target lib/librte_pci.a 00:02:08.615 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:08.615 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:08.615 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:08.616 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:08.874 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:08.874 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:08.874 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:08.874 [133/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.874 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:08.874 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:08.874 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:08.874 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:08.874 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:08.874 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:08.874 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:09.140 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:09.140 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:09.140 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:09.442 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:09.442 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:09.442 [146/265] Linking static target lib/librte_cmdline.a 00:02:09.442 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:09.442 [148/265] Linking static target lib/librte_ethdev.a 00:02:09.443 [149/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:09.443 [150/265] Linking static target lib/librte_timer.a 00:02:09.709 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:09.709 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:09.709 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:09.968 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:09.968 [155/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:09.968 [156/265] Linking static target lib/librte_compressdev.a 00:02:10.226 [157/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.226 [158/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:10.226 [159/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:10.486 [160/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:10.486 [161/265] Linking static target lib/librte_hash.a 00:02:10.486 [162/265] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:10.748 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:10.748 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:10.748 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:10.748 [166/265] Linking static target lib/librte_dmadev.a 00:02:11.006 [167/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.006 [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:11.006 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:11.006 [170/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:11.006 [171/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.006 [172/265] Linking static target lib/librte_cryptodev.a 00:02:11.264 [173/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:11.523 [174/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:11.523 [175/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.523 [176/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.523 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:11.523 [178/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:11.523 [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:11.781 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:11.781 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:12.040 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:12.298 [183/265] Linking static target lib/librte_power.a 00:02:12.298 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:12.298 [185/265] Linking static target lib/librte_reorder.a 00:02:12.298 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:12.298 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:12.866 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:12.866 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:12.866 [190/265] Linking static target lib/librte_security.a 00:02:12.866 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:12.866 [192/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.434 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.434 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:13.434 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:13.434 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:13.434 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:13.434 [198/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.434 [199/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.001 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:14.001 [201/265] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:14.001 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:14.001 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:14.001 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:14.260 [205/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:14.260 [206/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:14.260 [207/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:14.260 [208/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:14.260 [209/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:14.518 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:14.518 [211/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:14.518 [212/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:14.518 [213/265] Linking static target drivers/librte_bus_pci.a 00:02:14.518 [214/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:14.518 [215/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:14.518 [216/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:14.518 [217/265] Linking static target drivers/librte_bus_vdev.a 00:02:14.777 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:14.777 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:14.777 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.777 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:14.777 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:14.777 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:14.777 [224/265] Linking static target drivers/librte_mempool_ring.a 00:02:15.035 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.971 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:15.971 [227/265] Linking static target lib/librte_vhost.a 00:02:16.229 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.488 [229/265] Linking target lib/librte_eal.so.24.0 00:02:16.488 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:16.488 [231/265] Linking target lib/librte_ring.so.24.0 00:02:16.488 [232/265] Linking target lib/librte_meter.so.24.0 00:02:16.488 [233/265] Linking target lib/librte_timer.so.24.0 00:02:16.488 [234/265] Linking target lib/librte_dmadev.so.24.0 00:02:16.488 [235/265] Linking target lib/librte_pci.so.24.0 00:02:16.488 [236/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:16.747 [237/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:16.747 [238/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.747 [239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:16.747 [240/265] Generating symbol file 
lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:16.747 [241/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:16.747 [242/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:16.747 [243/265] Linking target lib/librte_rcu.so.24.0 00:02:16.747 [244/265] Linking target lib/librte_mempool.so.24.0 00:02:16.747 [245/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:17.006 [246/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:17.006 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:17.006 [248/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:17.006 [249/265] Linking target lib/librte_mbuf.so.24.0 00:02:17.006 [250/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:17.265 [251/265] Linking target lib/librte_reorder.so.24.0 00:02:17.265 [252/265] Linking target lib/librte_net.so.24.0 00:02:17.265 [253/265] Linking target lib/librte_compressdev.so.24.0 00:02:17.265 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:02:17.265 [255/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.265 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:17.265 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:17.265 [258/265] Linking target lib/librte_security.so.24.0 00:02:17.265 [259/265] Linking target lib/librte_hash.so.24.0 00:02:17.265 [260/265] Linking target lib/librte_cmdline.so.24.0 00:02:17.523 [261/265] Linking target lib/librte_ethdev.so.24.0 00:02:17.523 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:17.523 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:17.523 [264/265] Linking target lib/librte_power.so.24.0 00:02:17.782 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:17.782 INFO: autodetecting backend as ninja 00:02:17.782 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:18.718 CC lib/ut/ut.o 00:02:18.718 CC lib/ut_mock/mock.o 00:02:18.718 CC lib/log/log.o 00:02:18.718 CC lib/log/log_flags.o 00:02:18.718 CC lib/log/log_deprecated.o 00:02:18.977 LIB libspdk_ut_mock.a 00:02:18.977 SO libspdk_ut_mock.so.5.0 00:02:18.977 LIB libspdk_log.a 00:02:18.977 LIB libspdk_ut.a 00:02:18.977 SO libspdk_ut.so.1.0 00:02:18.977 SO libspdk_log.so.6.1 00:02:18.977 SYMLINK libspdk_ut_mock.so 00:02:18.977 SYMLINK libspdk_ut.so 00:02:18.977 SYMLINK libspdk_log.so 00:02:19.235 CC lib/dma/dma.o 00:02:19.235 CXX lib/trace_parser/trace.o 00:02:19.235 CC lib/ioat/ioat.o 00:02:19.235 CC lib/util/base64.o 00:02:19.235 CC lib/util/bit_array.o 00:02:19.235 CC lib/util/cpuset.o 00:02:19.235 CC lib/util/crc16.o 00:02:19.235 CC lib/util/crc32c.o 00:02:19.235 CC lib/util/crc32.o 00:02:19.235 CC lib/vfio_user/host/vfio_user_pci.o 00:02:19.494 CC lib/util/crc32_ieee.o 00:02:19.494 CC lib/util/crc64.o 00:02:19.494 CC lib/util/dif.o 00:02:19.494 CC lib/util/fd.o 00:02:19.494 CC lib/util/file.o 00:02:19.494 LIB libspdk_dma.a 00:02:19.494 SO libspdk_dma.so.3.0 00:02:19.494 CC lib/vfio_user/host/vfio_user.o 00:02:19.494 CC lib/util/hexlify.o 00:02:19.494 CC lib/util/iov.o 00:02:19.494 LIB libspdk_ioat.a 00:02:19.494 SYMLINK libspdk_dma.so 00:02:19.494 CC lib/util/math.o 00:02:19.494 SO 
libspdk_ioat.so.6.0 00:02:19.753 CC lib/util/pipe.o 00:02:19.753 CC lib/util/strerror_tls.o 00:02:19.753 CC lib/util/string.o 00:02:19.753 SYMLINK libspdk_ioat.so 00:02:19.753 CC lib/util/uuid.o 00:02:19.753 CC lib/util/fd_group.o 00:02:19.753 LIB libspdk_vfio_user.a 00:02:19.753 CC lib/util/xor.o 00:02:19.753 SO libspdk_vfio_user.so.4.0 00:02:19.753 CC lib/util/zipf.o 00:02:19.753 SYMLINK libspdk_vfio_user.so 00:02:20.012 LIB libspdk_util.a 00:02:20.012 SO libspdk_util.so.8.0 00:02:20.271 SYMLINK libspdk_util.so 00:02:20.271 LIB libspdk_trace_parser.a 00:02:20.271 CC lib/vmd/vmd.o 00:02:20.271 CC lib/rdma/common.o 00:02:20.271 CC lib/json/json_parse.o 00:02:20.271 CC lib/rdma/rdma_verbs.o 00:02:20.271 CC lib/vmd/led.o 00:02:20.271 CC lib/json/json_util.o 00:02:20.271 CC lib/idxd/idxd.o 00:02:20.271 CC lib/conf/conf.o 00:02:20.271 CC lib/env_dpdk/env.o 00:02:20.271 SO libspdk_trace_parser.so.4.0 00:02:20.530 SYMLINK libspdk_trace_parser.so 00:02:20.530 CC lib/env_dpdk/memory.o 00:02:20.530 CC lib/env_dpdk/pci.o 00:02:20.530 CC lib/env_dpdk/init.o 00:02:20.530 CC lib/env_dpdk/threads.o 00:02:20.530 CC lib/json/json_write.o 00:02:20.530 LIB libspdk_conf.a 00:02:20.530 SO libspdk_conf.so.5.0 00:02:20.530 LIB libspdk_rdma.a 00:02:20.530 SYMLINK libspdk_conf.so 00:02:20.530 CC lib/env_dpdk/pci_ioat.o 00:02:20.788 SO libspdk_rdma.so.5.0 00:02:20.788 CC lib/env_dpdk/pci_virtio.o 00:02:20.788 SYMLINK libspdk_rdma.so 00:02:20.788 CC lib/env_dpdk/pci_vmd.o 00:02:20.788 CC lib/env_dpdk/pci_idxd.o 00:02:20.788 CC lib/env_dpdk/pci_event.o 00:02:20.788 LIB libspdk_json.a 00:02:20.788 CC lib/idxd/idxd_user.o 00:02:20.788 CC lib/idxd/idxd_kernel.o 00:02:20.788 CC lib/env_dpdk/sigbus_handler.o 00:02:20.788 SO libspdk_json.so.5.1 00:02:20.788 CC lib/env_dpdk/pci_dpdk.o 00:02:21.047 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:21.047 SYMLINK libspdk_json.so 00:02:21.047 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:21.047 LIB libspdk_vmd.a 00:02:21.047 SO libspdk_vmd.so.5.0 00:02:21.047 SYMLINK libspdk_vmd.so 00:02:21.047 LIB libspdk_idxd.a 00:02:21.047 CC lib/jsonrpc/jsonrpc_server.o 00:02:21.047 CC lib/jsonrpc/jsonrpc_client.o 00:02:21.047 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:21.047 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:21.047 SO libspdk_idxd.so.11.0 00:02:21.306 SYMLINK libspdk_idxd.so 00:02:21.565 LIB libspdk_jsonrpc.a 00:02:21.565 SO libspdk_jsonrpc.so.5.1 00:02:21.565 SYMLINK libspdk_jsonrpc.so 00:02:21.824 CC lib/rpc/rpc.o 00:02:21.824 LIB libspdk_env_dpdk.a 00:02:21.824 SO libspdk_env_dpdk.so.13.0 00:02:21.824 LIB libspdk_rpc.a 00:02:22.083 SO libspdk_rpc.so.5.0 00:02:22.083 SYMLINK libspdk_env_dpdk.so 00:02:22.083 SYMLINK libspdk_rpc.so 00:02:22.083 CC lib/sock/sock.o 00:02:22.083 CC lib/sock/sock_rpc.o 00:02:22.083 CC lib/notify/notify.o 00:02:22.083 CC lib/notify/notify_rpc.o 00:02:22.083 CC lib/trace/trace.o 00:02:22.083 CC lib/trace/trace_flags.o 00:02:22.083 CC lib/trace/trace_rpc.o 00:02:22.343 LIB libspdk_notify.a 00:02:22.343 SO libspdk_notify.so.5.0 00:02:22.343 SYMLINK libspdk_notify.so 00:02:22.343 LIB libspdk_trace.a 00:02:22.603 SO libspdk_trace.so.9.0 00:02:22.603 SYMLINK libspdk_trace.so 00:02:22.603 LIB libspdk_sock.a 00:02:22.603 SO libspdk_sock.so.8.0 00:02:22.603 CC lib/thread/thread.o 00:02:22.862 CC lib/thread/iobuf.o 00:02:22.862 SYMLINK libspdk_sock.so 00:02:22.862 CC lib/nvme/nvme_ctrlr.o 00:02:22.862 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:22.862 CC lib/nvme/nvme_fabric.o 00:02:22.862 CC lib/nvme/nvme_ns_cmd.o 00:02:22.862 CC lib/nvme/nvme_ns.o 00:02:22.862 CC 
lib/nvme/nvme_pcie_common.o 00:02:22.862 CC lib/nvme/nvme_pcie.o 00:02:22.862 CC lib/nvme/nvme_qpair.o 00:02:23.121 CC lib/nvme/nvme.o 00:02:23.688 CC lib/nvme/nvme_quirks.o 00:02:23.688 CC lib/nvme/nvme_transport.o 00:02:23.947 CC lib/nvme/nvme_discovery.o 00:02:23.947 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:23.947 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:23.947 CC lib/nvme/nvme_tcp.o 00:02:23.947 CC lib/nvme/nvme_opal.o 00:02:24.205 CC lib/nvme/nvme_io_msg.o 00:02:24.205 LIB libspdk_thread.a 00:02:24.464 CC lib/nvme/nvme_poll_group.o 00:02:24.464 SO libspdk_thread.so.9.0 00:02:24.464 CC lib/nvme/nvme_zns.o 00:02:24.464 SYMLINK libspdk_thread.so 00:02:24.464 CC lib/nvme/nvme_cuse.o 00:02:24.464 CC lib/nvme/nvme_vfio_user.o 00:02:24.464 CC lib/nvme/nvme_rdma.o 00:02:24.723 CC lib/blob/blobstore.o 00:02:24.723 CC lib/accel/accel.o 00:02:24.723 CC lib/accel/accel_rpc.o 00:02:24.981 CC lib/accel/accel_sw.o 00:02:24.981 CC lib/blob/request.o 00:02:25.239 CC lib/init/json_config.o 00:02:25.239 CC lib/virtio/virtio.o 00:02:25.239 CC lib/virtio/virtio_vhost_user.o 00:02:25.239 CC lib/virtio/virtio_vfio_user.o 00:02:25.239 CC lib/blob/zeroes.o 00:02:25.498 CC lib/blob/blob_bs_dev.o 00:02:25.498 CC lib/init/subsystem.o 00:02:25.498 CC lib/virtio/virtio_pci.o 00:02:25.498 CC lib/init/subsystem_rpc.o 00:02:25.498 CC lib/init/rpc.o 00:02:25.757 CC lib/vfu_tgt/tgt_endpoint.o 00:02:25.757 CC lib/vfu_tgt/tgt_rpc.o 00:02:25.757 LIB libspdk_init.a 00:02:25.757 SO libspdk_init.so.4.0 00:02:25.757 LIB libspdk_virtio.a 00:02:25.757 LIB libspdk_accel.a 00:02:25.757 SYMLINK libspdk_init.so 00:02:25.757 SO libspdk_accel.so.14.0 00:02:25.757 SO libspdk_virtio.so.6.0 00:02:25.757 LIB libspdk_nvme.a 00:02:25.757 SYMLINK libspdk_accel.so 00:02:25.757 SYMLINK libspdk_virtio.so 00:02:26.015 CC lib/event/app.o 00:02:26.015 CC lib/event/reactor.o 00:02:26.015 CC lib/event/app_rpc.o 00:02:26.015 CC lib/event/scheduler_static.o 00:02:26.015 CC lib/event/log_rpc.o 00:02:26.015 LIB libspdk_vfu_tgt.a 00:02:26.015 CC lib/bdev/bdev.o 00:02:26.015 CC lib/bdev/bdev_rpc.o 00:02:26.015 SO libspdk_vfu_tgt.so.2.0 00:02:26.015 SO libspdk_nvme.so.12.0 00:02:26.015 CC lib/bdev/bdev_zone.o 00:02:26.015 SYMLINK libspdk_vfu_tgt.so 00:02:26.015 CC lib/bdev/part.o 00:02:26.015 CC lib/bdev/scsi_nvme.o 00:02:26.273 SYMLINK libspdk_nvme.so 00:02:26.273 LIB libspdk_event.a 00:02:26.532 SO libspdk_event.so.12.0 00:02:26.532 SYMLINK libspdk_event.so 00:02:27.468 LIB libspdk_blob.a 00:02:27.468 SO libspdk_blob.so.10.1 00:02:27.726 SYMLINK libspdk_blob.so 00:02:27.726 CC lib/lvol/lvol.o 00:02:27.726 CC lib/blobfs/blobfs.o 00:02:27.726 CC lib/blobfs/tree.o 00:02:28.661 LIB libspdk_blobfs.a 00:02:28.661 LIB libspdk_bdev.a 00:02:28.661 LIB libspdk_lvol.a 00:02:28.662 SO libspdk_blobfs.so.9.0 00:02:28.662 SO libspdk_lvol.so.9.1 00:02:28.662 SO libspdk_bdev.so.14.0 00:02:28.662 SYMLINK libspdk_blobfs.so 00:02:28.662 SYMLINK libspdk_lvol.so 00:02:28.920 SYMLINK libspdk_bdev.so 00:02:28.920 CC lib/nvmf/ctrlr.o 00:02:28.920 CC lib/nvmf/ctrlr_discovery.o 00:02:28.920 CC lib/nvmf/ctrlr_bdev.o 00:02:28.920 CC lib/nvmf/subsystem.o 00:02:28.920 CC lib/nvmf/nvmf.o 00:02:28.920 CC lib/nvmf/nvmf_rpc.o 00:02:28.920 CC lib/scsi/dev.o 00:02:28.920 CC lib/ublk/ublk.o 00:02:28.920 CC lib/nbd/nbd.o 00:02:28.920 CC lib/ftl/ftl_core.o 00:02:29.178 CC lib/scsi/lun.o 00:02:29.436 CC lib/nbd/nbd_rpc.o 00:02:29.436 CC lib/ftl/ftl_init.o 00:02:29.436 CC lib/nvmf/transport.o 00:02:29.436 CC lib/scsi/port.o 00:02:29.747 LIB libspdk_nbd.a 00:02:29.747 CC lib/ublk/ublk_rpc.o 
00:02:29.747 SO libspdk_nbd.so.6.0 00:02:29.747 CC lib/ftl/ftl_layout.o 00:02:29.747 CC lib/nvmf/tcp.o 00:02:29.747 CC lib/scsi/scsi.o 00:02:29.747 SYMLINK libspdk_nbd.so 00:02:29.747 CC lib/scsi/scsi_bdev.o 00:02:29.747 LIB libspdk_ublk.a 00:02:29.747 CC lib/nvmf/vfio_user.o 00:02:29.747 SO libspdk_ublk.so.2.0 00:02:30.032 SYMLINK libspdk_ublk.so 00:02:30.032 CC lib/nvmf/rdma.o 00:02:30.032 CC lib/ftl/ftl_debug.o 00:02:30.032 CC lib/ftl/ftl_io.o 00:02:30.032 CC lib/scsi/scsi_pr.o 00:02:30.032 CC lib/scsi/scsi_rpc.o 00:02:30.032 CC lib/scsi/task.o 00:02:30.290 CC lib/ftl/ftl_sb.o 00:02:30.290 CC lib/ftl/ftl_l2p.o 00:02:30.290 CC lib/ftl/ftl_l2p_flat.o 00:02:30.290 CC lib/ftl/ftl_nv_cache.o 00:02:30.290 CC lib/ftl/ftl_band.o 00:02:30.290 LIB libspdk_scsi.a 00:02:30.290 CC lib/ftl/ftl_band_ops.o 00:02:30.290 CC lib/ftl/ftl_writer.o 00:02:30.290 CC lib/ftl/ftl_rq.o 00:02:30.290 SO libspdk_scsi.so.8.0 00:02:30.547 SYMLINK libspdk_scsi.so 00:02:30.547 CC lib/ftl/ftl_reloc.o 00:02:30.547 CC lib/ftl/ftl_l2p_cache.o 00:02:30.547 CC lib/ftl/ftl_p2l.o 00:02:30.805 CC lib/ftl/mngt/ftl_mngt.o 00:02:30.805 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:30.805 CC lib/iscsi/conn.o 00:02:30.805 CC lib/iscsi/init_grp.o 00:02:31.063 CC lib/iscsi/iscsi.o 00:02:31.063 CC lib/iscsi/md5.o 00:02:31.063 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:31.063 CC lib/iscsi/param.o 00:02:31.063 CC lib/iscsi/portal_grp.o 00:02:31.063 CC lib/iscsi/tgt_node.o 00:02:31.321 CC lib/iscsi/iscsi_subsystem.o 00:02:31.321 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:31.321 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:31.321 CC lib/iscsi/iscsi_rpc.o 00:02:31.321 CC lib/iscsi/task.o 00:02:31.579 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:31.579 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:31.579 CC lib/vhost/vhost.o 00:02:31.579 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:31.579 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:31.579 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:31.579 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:31.579 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:31.837 CC lib/vhost/vhost_rpc.o 00:02:31.837 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:31.837 CC lib/ftl/utils/ftl_conf.o 00:02:31.837 CC lib/ftl/utils/ftl_md.o 00:02:31.837 CC lib/vhost/vhost_scsi.o 00:02:31.837 CC lib/ftl/utils/ftl_mempool.o 00:02:31.837 CC lib/ftl/utils/ftl_bitmap.o 00:02:32.095 CC lib/ftl/utils/ftl_property.o 00:02:32.095 LIB libspdk_nvmf.a 00:02:32.095 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:32.095 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:32.095 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:32.095 SO libspdk_nvmf.so.17.0 00:02:32.353 CC lib/vhost/vhost_blk.o 00:02:32.353 CC lib/vhost/rte_vhost_user.o 00:02:32.353 LIB libspdk_iscsi.a 00:02:32.353 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:32.353 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:32.353 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:32.353 SYMLINK libspdk_nvmf.so 00:02:32.353 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:32.353 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:32.353 SO libspdk_iscsi.so.7.0 00:02:32.353 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:32.611 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:32.611 CC lib/ftl/base/ftl_base_dev.o 00:02:32.611 CC lib/ftl/base/ftl_base_bdev.o 00:02:32.611 SYMLINK libspdk_iscsi.so 00:02:32.611 CC lib/ftl/ftl_trace.o 00:02:32.869 LIB libspdk_ftl.a 00:02:33.128 SO libspdk_ftl.so.8.0 00:02:33.386 SYMLINK libspdk_ftl.so 00:02:33.386 LIB libspdk_vhost.a 00:02:33.386 SO libspdk_vhost.so.7.1 00:02:33.645 SYMLINK libspdk_vhost.so 00:02:33.645 CC module/vfu_device/vfu_virtio.o 00:02:33.645 CC module/env_dpdk/env_dpdk_rpc.o 
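The libraries linked above (libspdk_nvmf, libspdk_ftl, libspdk_iscsi, libspdk_vhost and the rest) are SPDK's core shared libraries; the module/ objects that start here include optional components such as the io_uring-based sock implementation and the vfio-user target, which are only compiled when the matching features are switched on at configure time. A minimal sketch of a configure step that would pull in those optional modules; the flag names are standard SPDK configure options, but the command line actually used by this job is generated by the CI scripts and is not visible in this excerpt, so treat the invocation as an assumption:

    # Hypothetical configure step matching the optional modules compiled below
    # (uring sock implementation, vfio-user target). The CI job derives its real
    # flags from the test configuration rather than running this by hand.
    ./configure --with-uring --with-vfio-user
    make -j10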
00:02:33.903 CC module/accel/dsa/accel_dsa.o 00:02:33.903 CC module/sock/uring/uring.o 00:02:33.903 CC module/sock/posix/posix.o 00:02:33.903 CC module/accel/iaa/accel_iaa.o 00:02:33.903 CC module/accel/error/accel_error.o 00:02:33.903 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:33.903 CC module/accel/ioat/accel_ioat.o 00:02:33.903 CC module/blob/bdev/blob_bdev.o 00:02:33.903 LIB libspdk_env_dpdk_rpc.a 00:02:33.903 SO libspdk_env_dpdk_rpc.so.5.0 00:02:33.903 SYMLINK libspdk_env_dpdk_rpc.so 00:02:33.903 CC module/accel/dsa/accel_dsa_rpc.o 00:02:33.903 CC module/accel/error/accel_error_rpc.o 00:02:33.903 LIB libspdk_scheduler_dynamic.a 00:02:33.903 CC module/accel/ioat/accel_ioat_rpc.o 00:02:33.903 CC module/accel/iaa/accel_iaa_rpc.o 00:02:34.161 SO libspdk_scheduler_dynamic.so.3.0 00:02:34.161 SYMLINK libspdk_scheduler_dynamic.so 00:02:34.161 LIB libspdk_blob_bdev.a 00:02:34.161 SO libspdk_blob_bdev.so.10.1 00:02:34.161 LIB libspdk_accel_dsa.a 00:02:34.161 LIB libspdk_accel_error.a 00:02:34.161 LIB libspdk_accel_iaa.a 00:02:34.161 LIB libspdk_accel_ioat.a 00:02:34.161 SO libspdk_accel_dsa.so.4.0 00:02:34.161 SO libspdk_accel_error.so.1.0 00:02:34.161 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:34.161 SO libspdk_accel_iaa.so.2.0 00:02:34.161 SO libspdk_accel_ioat.so.5.0 00:02:34.161 SYMLINK libspdk_blob_bdev.so 00:02:34.161 CC module/scheduler/gscheduler/gscheduler.o 00:02:34.161 SYMLINK libspdk_accel_dsa.so 00:02:34.161 SYMLINK libspdk_accel_ioat.so 00:02:34.161 SYMLINK libspdk_accel_error.so 00:02:34.161 SYMLINK libspdk_accel_iaa.so 00:02:34.161 CC module/vfu_device/vfu_virtio_blk.o 00:02:34.161 CC module/vfu_device/vfu_virtio_scsi.o 00:02:34.161 CC module/vfu_device/vfu_virtio_rpc.o 00:02:34.418 LIB libspdk_scheduler_dpdk_governor.a 00:02:34.418 LIB libspdk_scheduler_gscheduler.a 00:02:34.418 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:34.419 SO libspdk_scheduler_gscheduler.so.3.0 00:02:34.419 CC module/blobfs/bdev/blobfs_bdev.o 00:02:34.419 CC module/bdev/delay/vbdev_delay.o 00:02:34.419 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:34.419 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:34.419 SYMLINK libspdk_scheduler_gscheduler.so 00:02:34.419 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:34.419 LIB libspdk_sock_uring.a 00:02:34.419 CC module/bdev/error/vbdev_error.o 00:02:34.419 LIB libspdk_sock_posix.a 00:02:34.419 SO libspdk_sock_uring.so.4.0 00:02:34.676 SO libspdk_sock_posix.so.5.0 00:02:34.676 SYMLINK libspdk_sock_uring.so 00:02:34.676 LIB libspdk_vfu_device.a 00:02:34.676 CC module/bdev/gpt/gpt.o 00:02:34.676 CC module/bdev/error/vbdev_error_rpc.o 00:02:34.676 SYMLINK libspdk_sock_posix.so 00:02:34.676 CC module/bdev/gpt/vbdev_gpt.o 00:02:34.676 SO libspdk_vfu_device.so.2.0 00:02:34.676 LIB libspdk_blobfs_bdev.a 00:02:34.676 SO libspdk_blobfs_bdev.so.5.0 00:02:34.676 CC module/bdev/lvol/vbdev_lvol.o 00:02:34.676 CC module/bdev/null/bdev_null.o 00:02:34.676 CC module/bdev/malloc/bdev_malloc.o 00:02:34.676 SYMLINK libspdk_vfu_device.so 00:02:34.676 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:34.676 SYMLINK libspdk_blobfs_bdev.so 00:02:34.676 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:34.934 LIB libspdk_bdev_delay.a 00:02:34.934 LIB libspdk_bdev_error.a 00:02:34.934 CC module/bdev/null/bdev_null_rpc.o 00:02:34.934 SO libspdk_bdev_error.so.5.0 00:02:34.934 SO libspdk_bdev_delay.so.5.0 00:02:34.934 CC module/bdev/nvme/bdev_nvme.o 00:02:34.934 SYMLINK libspdk_bdev_delay.so 00:02:34.934 SYMLINK libspdk_bdev_error.so 00:02:34.934 CC 
module/bdev/nvme/bdev_nvme_rpc.o 00:02:34.934 LIB libspdk_bdev_gpt.a 00:02:34.934 SO libspdk_bdev_gpt.so.5.0 00:02:34.934 CC module/bdev/nvme/nvme_rpc.o 00:02:34.934 LIB libspdk_bdev_null.a 00:02:34.934 CC module/bdev/passthru/vbdev_passthru.o 00:02:34.934 CC module/bdev/raid/bdev_raid.o 00:02:34.934 SYMLINK libspdk_bdev_gpt.so 00:02:35.192 CC module/bdev/raid/bdev_raid_rpc.o 00:02:35.192 SO libspdk_bdev_null.so.5.0 00:02:35.192 CC module/bdev/raid/bdev_raid_sb.o 00:02:35.192 LIB libspdk_bdev_malloc.a 00:02:35.192 SYMLINK libspdk_bdev_null.so 00:02:35.192 SO libspdk_bdev_malloc.so.5.0 00:02:35.192 LIB libspdk_bdev_lvol.a 00:02:35.192 SYMLINK libspdk_bdev_malloc.so 00:02:35.192 CC module/bdev/split/vbdev_split.o 00:02:35.192 SO libspdk_bdev_lvol.so.5.0 00:02:35.192 CC module/bdev/split/vbdev_split_rpc.o 00:02:35.192 SYMLINK libspdk_bdev_lvol.so 00:02:35.450 CC module/bdev/raid/raid0.o 00:02:35.450 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:35.450 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:35.450 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:35.450 CC module/bdev/uring/bdev_uring.o 00:02:35.450 CC module/bdev/raid/raid1.o 00:02:35.450 LIB libspdk_bdev_split.a 00:02:35.450 SO libspdk_bdev_split.so.5.0 00:02:35.450 LIB libspdk_bdev_passthru.a 00:02:35.450 CC module/bdev/nvme/bdev_mdns_client.o 00:02:35.450 CC module/bdev/uring/bdev_uring_rpc.o 00:02:35.450 SO libspdk_bdev_passthru.so.5.0 00:02:35.450 SYMLINK libspdk_bdev_split.so 00:02:35.450 CC module/bdev/raid/concat.o 00:02:35.708 CC module/bdev/nvme/vbdev_opal.o 00:02:35.708 SYMLINK libspdk_bdev_passthru.so 00:02:35.708 LIB libspdk_bdev_zone_block.a 00:02:35.708 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:35.708 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:35.708 SO libspdk_bdev_zone_block.so.5.0 00:02:35.708 CC module/bdev/aio/bdev_aio.o 00:02:35.708 LIB libspdk_bdev_uring.a 00:02:35.708 CC module/bdev/aio/bdev_aio_rpc.o 00:02:35.708 SO libspdk_bdev_uring.so.5.0 00:02:35.708 CC module/bdev/ftl/bdev_ftl.o 00:02:35.708 SYMLINK libspdk_bdev_zone_block.so 00:02:35.966 SYMLINK libspdk_bdev_uring.so 00:02:35.966 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:35.966 LIB libspdk_bdev_raid.a 00:02:35.966 CC module/bdev/iscsi/bdev_iscsi.o 00:02:35.966 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:35.966 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:35.966 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:35.966 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:35.966 SO libspdk_bdev_raid.so.5.0 00:02:35.966 LIB libspdk_bdev_aio.a 00:02:36.227 SO libspdk_bdev_aio.so.5.0 00:02:36.227 SYMLINK libspdk_bdev_raid.so 00:02:36.227 LIB libspdk_bdev_ftl.a 00:02:36.227 SO libspdk_bdev_ftl.so.5.0 00:02:36.227 SYMLINK libspdk_bdev_aio.so 00:02:36.227 SYMLINK libspdk_bdev_ftl.so 00:02:36.227 LIB libspdk_bdev_iscsi.a 00:02:36.487 SO libspdk_bdev_iscsi.so.5.0 00:02:36.487 SYMLINK libspdk_bdev_iscsi.so 00:02:36.487 LIB libspdk_bdev_virtio.a 00:02:36.487 SO libspdk_bdev_virtio.so.5.0 00:02:36.487 SYMLINK libspdk_bdev_virtio.so 00:02:37.054 LIB libspdk_bdev_nvme.a 00:02:37.312 SO libspdk_bdev_nvme.so.6.0 00:02:37.312 SYMLINK libspdk_bdev_nvme.so 00:02:37.570 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:37.570 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:37.570 CC module/event/subsystems/vmd/vmd.o 00:02:37.570 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:37.570 CC module/event/subsystems/scheduler/scheduler.o 00:02:37.570 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:37.570 CC module/event/subsystems/iobuf/iobuf.o 00:02:37.570 CC 
module/event/subsystems/sock/sock.o 00:02:37.829 LIB libspdk_event_sock.a 00:02:37.829 LIB libspdk_event_vhost_blk.a 00:02:37.829 LIB libspdk_event_scheduler.a 00:02:37.829 SO libspdk_event_sock.so.4.0 00:02:37.829 LIB libspdk_event_iobuf.a 00:02:37.829 LIB libspdk_event_vmd.a 00:02:37.829 SO libspdk_event_vhost_blk.so.2.0 00:02:37.829 SO libspdk_event_scheduler.so.3.0 00:02:37.829 LIB libspdk_event_vfu_tgt.a 00:02:37.829 SO libspdk_event_vmd.so.5.0 00:02:37.829 SO libspdk_event_iobuf.so.2.0 00:02:37.829 SO libspdk_event_vfu_tgt.so.2.0 00:02:37.829 SYMLINK libspdk_event_vhost_blk.so 00:02:37.829 SYMLINK libspdk_event_sock.so 00:02:37.829 SYMLINK libspdk_event_scheduler.so 00:02:37.829 SYMLINK libspdk_event_vfu_tgt.so 00:02:38.089 SYMLINK libspdk_event_vmd.so 00:02:38.089 SYMLINK libspdk_event_iobuf.so 00:02:38.089 CC module/event/subsystems/accel/accel.o 00:02:38.349 LIB libspdk_event_accel.a 00:02:38.349 SO libspdk_event_accel.so.5.0 00:02:38.349 SYMLINK libspdk_event_accel.so 00:02:38.609 CC module/event/subsystems/bdev/bdev.o 00:02:38.868 LIB libspdk_event_bdev.a 00:02:38.868 SO libspdk_event_bdev.so.5.0 00:02:38.868 SYMLINK libspdk_event_bdev.so 00:02:38.868 CC module/event/subsystems/ublk/ublk.o 00:02:38.868 CC module/event/subsystems/nbd/nbd.o 00:02:39.127 CC module/event/subsystems/scsi/scsi.o 00:02:39.127 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:39.127 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:39.127 LIB libspdk_event_nbd.a 00:02:39.127 LIB libspdk_event_scsi.a 00:02:39.127 LIB libspdk_event_ublk.a 00:02:39.127 SO libspdk_event_scsi.so.5.0 00:02:39.127 SO libspdk_event_nbd.so.5.0 00:02:39.127 SO libspdk_event_ublk.so.2.0 00:02:39.386 SYMLINK libspdk_event_nbd.so 00:02:39.386 SYMLINK libspdk_event_scsi.so 00:02:39.386 LIB libspdk_event_nvmf.a 00:02:39.386 SYMLINK libspdk_event_ublk.so 00:02:39.386 SO libspdk_event_nvmf.so.5.0 00:02:39.386 SYMLINK libspdk_event_nvmf.so 00:02:39.386 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:39.386 CC module/event/subsystems/iscsi/iscsi.o 00:02:39.644 LIB libspdk_event_vhost_scsi.a 00:02:39.644 SO libspdk_event_vhost_scsi.so.2.0 00:02:39.644 LIB libspdk_event_iscsi.a 00:02:39.645 SO libspdk_event_iscsi.so.5.0 00:02:39.645 SYMLINK libspdk_event_vhost_scsi.so 00:02:39.645 SYMLINK libspdk_event_iscsi.so 00:02:39.903 SO libspdk.so.5.0 00:02:39.903 SYMLINK libspdk.so 00:02:39.903 CC app/trace_record/trace_record.o 00:02:39.903 CC app/spdk_lspci/spdk_lspci.o 00:02:39.903 CXX app/trace/trace.o 00:02:40.162 CC app/nvmf_tgt/nvmf_main.o 00:02:40.162 CC app/iscsi_tgt/iscsi_tgt.o 00:02:40.162 CC app/spdk_tgt/spdk_tgt.o 00:02:40.162 CC examples/accel/perf/accel_perf.o 00:02:40.162 CC test/accel/dif/dif.o 00:02:40.162 CC examples/bdev/hello_world/hello_bdev.o 00:02:40.162 CC examples/blob/hello_world/hello_blob.o 00:02:40.162 LINK spdk_lspci 00:02:40.162 LINK spdk_trace_record 00:02:40.420 LINK nvmf_tgt 00:02:40.420 LINK spdk_tgt 00:02:40.420 LINK iscsi_tgt 00:02:40.420 LINK hello_bdev 00:02:40.420 LINK hello_blob 00:02:40.420 LINK spdk_trace 00:02:40.420 CC examples/blob/cli/blobcli.o 00:02:40.420 CC app/spdk_nvme_perf/perf.o 00:02:40.420 LINK dif 00:02:40.678 LINK accel_perf 00:02:40.678 CC test/app/bdev_svc/bdev_svc.o 00:02:40.678 CC test/bdev/bdevio/bdevio.o 00:02:40.678 TEST_HEADER include/spdk/accel.h 00:02:40.678 TEST_HEADER include/spdk/accel_module.h 00:02:40.678 TEST_HEADER include/spdk/assert.h 00:02:40.678 TEST_HEADER include/spdk/barrier.h 00:02:40.678 TEST_HEADER include/spdk/base64.h 00:02:40.678 TEST_HEADER 
include/spdk/bdev.h 00:02:40.678 TEST_HEADER include/spdk/bdev_module.h 00:02:40.678 TEST_HEADER include/spdk/bdev_zone.h 00:02:40.678 TEST_HEADER include/spdk/bit_array.h 00:02:40.678 TEST_HEADER include/spdk/bit_pool.h 00:02:40.678 TEST_HEADER include/spdk/blob_bdev.h 00:02:40.678 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:40.678 TEST_HEADER include/spdk/blobfs.h 00:02:40.678 TEST_HEADER include/spdk/blob.h 00:02:40.678 TEST_HEADER include/spdk/conf.h 00:02:40.678 CC examples/bdev/bdevperf/bdevperf.o 00:02:40.678 TEST_HEADER include/spdk/config.h 00:02:40.678 TEST_HEADER include/spdk/cpuset.h 00:02:40.678 TEST_HEADER include/spdk/crc16.h 00:02:40.678 TEST_HEADER include/spdk/crc32.h 00:02:40.678 TEST_HEADER include/spdk/crc64.h 00:02:40.678 TEST_HEADER include/spdk/dif.h 00:02:40.678 TEST_HEADER include/spdk/dma.h 00:02:40.678 TEST_HEADER include/spdk/endian.h 00:02:40.678 TEST_HEADER include/spdk/env_dpdk.h 00:02:40.678 TEST_HEADER include/spdk/env.h 00:02:40.678 TEST_HEADER include/spdk/event.h 00:02:40.678 TEST_HEADER include/spdk/fd_group.h 00:02:40.678 TEST_HEADER include/spdk/fd.h 00:02:40.678 TEST_HEADER include/spdk/file.h 00:02:40.678 TEST_HEADER include/spdk/ftl.h 00:02:40.678 TEST_HEADER include/spdk/gpt_spec.h 00:02:40.678 TEST_HEADER include/spdk/hexlify.h 00:02:40.678 TEST_HEADER include/spdk/histogram_data.h 00:02:40.678 CC test/blobfs/mkfs/mkfs.o 00:02:40.678 TEST_HEADER include/spdk/idxd.h 00:02:40.678 TEST_HEADER include/spdk/idxd_spec.h 00:02:40.678 TEST_HEADER include/spdk/init.h 00:02:40.678 TEST_HEADER include/spdk/ioat.h 00:02:40.678 TEST_HEADER include/spdk/ioat_spec.h 00:02:40.678 TEST_HEADER include/spdk/iscsi_spec.h 00:02:40.678 TEST_HEADER include/spdk/json.h 00:02:40.678 TEST_HEADER include/spdk/jsonrpc.h 00:02:40.678 TEST_HEADER include/spdk/likely.h 00:02:40.678 TEST_HEADER include/spdk/log.h 00:02:40.678 TEST_HEADER include/spdk/lvol.h 00:02:40.678 TEST_HEADER include/spdk/memory.h 00:02:40.678 TEST_HEADER include/spdk/mmio.h 00:02:40.678 TEST_HEADER include/spdk/nbd.h 00:02:40.678 TEST_HEADER include/spdk/notify.h 00:02:40.941 CC test/dma/test_dma/test_dma.o 00:02:40.941 TEST_HEADER include/spdk/nvme.h 00:02:40.941 TEST_HEADER include/spdk/nvme_intel.h 00:02:40.941 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:40.941 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:40.941 TEST_HEADER include/spdk/nvme_spec.h 00:02:40.941 TEST_HEADER include/spdk/nvme_zns.h 00:02:40.941 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:40.941 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:40.941 TEST_HEADER include/spdk/nvmf.h 00:02:40.941 TEST_HEADER include/spdk/nvmf_spec.h 00:02:40.941 TEST_HEADER include/spdk/nvmf_transport.h 00:02:40.941 TEST_HEADER include/spdk/opal.h 00:02:40.941 TEST_HEADER include/spdk/opal_spec.h 00:02:40.941 TEST_HEADER include/spdk/pci_ids.h 00:02:40.941 TEST_HEADER include/spdk/pipe.h 00:02:40.941 TEST_HEADER include/spdk/queue.h 00:02:40.941 TEST_HEADER include/spdk/reduce.h 00:02:40.941 TEST_HEADER include/spdk/rpc.h 00:02:40.941 TEST_HEADER include/spdk/scheduler.h 00:02:40.941 TEST_HEADER include/spdk/scsi.h 00:02:40.941 TEST_HEADER include/spdk/scsi_spec.h 00:02:40.941 TEST_HEADER include/spdk/sock.h 00:02:40.941 TEST_HEADER include/spdk/stdinc.h 00:02:40.941 TEST_HEADER include/spdk/string.h 00:02:40.941 TEST_HEADER include/spdk/thread.h 00:02:40.941 LINK bdev_svc 00:02:40.941 TEST_HEADER include/spdk/trace.h 00:02:40.941 TEST_HEADER include/spdk/trace_parser.h 00:02:40.941 TEST_HEADER include/spdk/tree.h 00:02:40.941 TEST_HEADER 
include/spdk/ublk.h 00:02:40.941 TEST_HEADER include/spdk/util.h 00:02:40.941 TEST_HEADER include/spdk/uuid.h 00:02:40.941 TEST_HEADER include/spdk/version.h 00:02:40.941 CC examples/ioat/perf/perf.o 00:02:40.941 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:40.941 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:40.941 TEST_HEADER include/spdk/vhost.h 00:02:40.941 TEST_HEADER include/spdk/vmd.h 00:02:40.941 TEST_HEADER include/spdk/xor.h 00:02:40.941 TEST_HEADER include/spdk/zipf.h 00:02:40.941 CXX test/cpp_headers/accel.o 00:02:40.941 CC test/env/mem_callbacks/mem_callbacks.o 00:02:40.941 LINK mkfs 00:02:40.941 LINK blobcli 00:02:41.201 CXX test/cpp_headers/accel_module.o 00:02:41.201 LINK bdevio 00:02:41.201 LINK ioat_perf 00:02:41.201 LINK test_dma 00:02:41.201 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:41.201 CXX test/cpp_headers/assert.o 00:02:41.201 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:41.201 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:41.201 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:41.201 CC examples/ioat/verify/verify.o 00:02:41.458 LINK spdk_nvme_perf 00:02:41.458 CXX test/cpp_headers/barrier.o 00:02:41.458 LINK bdevperf 00:02:41.458 LINK mem_callbacks 00:02:41.458 LINK verify 00:02:41.459 CC test/event/event_perf/event_perf.o 00:02:41.716 CXX test/cpp_headers/base64.o 00:02:41.716 CC app/spdk_nvme_identify/identify.o 00:02:41.716 LINK nvme_fuzz 00:02:41.716 CC test/lvol/esnap/esnap.o 00:02:41.716 LINK event_perf 00:02:41.716 LINK vhost_fuzz 00:02:41.716 CC test/env/vtophys/vtophys.o 00:02:41.716 CXX test/cpp_headers/bdev.o 00:02:41.716 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:41.974 CC examples/nvme/hello_world/hello_world.o 00:02:41.974 CC examples/nvme/reconnect/reconnect.o 00:02:41.974 LINK vtophys 00:02:41.974 LINK env_dpdk_post_init 00:02:41.974 CC test/event/reactor/reactor.o 00:02:41.974 CXX test/cpp_headers/bdev_module.o 00:02:41.974 CC examples/sock/hello_world/hello_sock.o 00:02:42.231 LINK reactor 00:02:42.231 LINK hello_world 00:02:42.231 CXX test/cpp_headers/bdev_zone.o 00:02:42.231 CC test/env/memory/memory_ut.o 00:02:42.231 CC examples/vmd/lsvmd/lsvmd.o 00:02:42.231 LINK reconnect 00:02:42.231 LINK hello_sock 00:02:42.231 CC test/event/reactor_perf/reactor_perf.o 00:02:42.231 LINK lsvmd 00:02:42.489 CC examples/vmd/led/led.o 00:02:42.489 LINK spdk_nvme_identify 00:02:42.489 CXX test/cpp_headers/bit_array.o 00:02:42.489 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:42.489 LINK reactor_perf 00:02:42.489 CC examples/nvme/arbitration/arbitration.o 00:02:42.489 LINK led 00:02:42.489 CXX test/cpp_headers/bit_pool.o 00:02:42.489 CC test/event/app_repeat/app_repeat.o 00:02:42.747 CC app/spdk_nvme_discover/discovery_aer.o 00:02:42.747 CC examples/nvme/hotplug/hotplug.o 00:02:42.747 LINK app_repeat 00:02:42.747 CXX test/cpp_headers/blob_bdev.o 00:02:42.747 CC test/app/histogram_perf/histogram_perf.o 00:02:43.005 LINK arbitration 00:02:43.005 LINK spdk_nvme_discover 00:02:43.005 LINK iscsi_fuzz 00:02:43.005 LINK histogram_perf 00:02:43.005 CXX test/cpp_headers/blobfs_bdev.o 00:02:43.005 LINK hotplug 00:02:43.005 LINK nvme_manage 00:02:43.005 CC test/event/scheduler/scheduler.o 00:02:43.005 CC app/spdk_top/spdk_top.o 00:02:43.005 LINK memory_ut 00:02:43.263 CXX test/cpp_headers/blobfs.o 00:02:43.263 CC app/vhost/vhost.o 00:02:43.263 CC app/spdk_dd/spdk_dd.o 00:02:43.263 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:43.263 CC test/app/jsoncat/jsoncat.o 00:02:43.263 CC app/fio/nvme/fio_plugin.o 00:02:43.263 LINK scheduler 00:02:43.263 
CXX test/cpp_headers/blob.o 00:02:43.263 LINK vhost 00:02:43.263 CC test/env/pci/pci_ut.o 00:02:43.521 LINK jsoncat 00:02:43.521 LINK cmb_copy 00:02:43.521 CXX test/cpp_headers/conf.o 00:02:43.521 LINK spdk_dd 00:02:43.521 CC app/fio/bdev/fio_plugin.o 00:02:43.521 CC test/app/stub/stub.o 00:02:43.521 CXX test/cpp_headers/config.o 00:02:43.779 CC examples/nvme/abort/abort.o 00:02:43.779 CXX test/cpp_headers/cpuset.o 00:02:43.779 LINK pci_ut 00:02:43.779 CC test/nvme/aer/aer.o 00:02:43.779 LINK stub 00:02:43.779 LINK spdk_nvme 00:02:43.779 CC test/nvme/reset/reset.o 00:02:43.779 CXX test/cpp_headers/crc16.o 00:02:44.037 LINK spdk_top 00:02:44.037 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:44.037 CXX test/cpp_headers/crc32.o 00:02:44.037 LINK abort 00:02:44.037 CC test/nvme/sgl/sgl.o 00:02:44.037 CC test/rpc_client/rpc_client_test.o 00:02:44.037 LINK aer 00:02:44.037 LINK reset 00:02:44.037 LINK spdk_bdev 00:02:44.037 CXX test/cpp_headers/crc64.o 00:02:44.295 LINK pmr_persistence 00:02:44.295 CXX test/cpp_headers/dif.o 00:02:44.295 CXX test/cpp_headers/dma.o 00:02:44.295 LINK rpc_client_test 00:02:44.295 CXX test/cpp_headers/endian.o 00:02:44.295 CXX test/cpp_headers/env_dpdk.o 00:02:44.295 LINK sgl 00:02:44.295 CXX test/cpp_headers/env.o 00:02:44.295 CC examples/nvmf/nvmf/nvmf.o 00:02:44.295 CC test/thread/poller_perf/poller_perf.o 00:02:44.553 CXX test/cpp_headers/event.o 00:02:44.553 CXX test/cpp_headers/fd_group.o 00:02:44.553 CXX test/cpp_headers/fd.o 00:02:44.553 CC test/nvme/e2edp/nvme_dp.o 00:02:44.553 CC test/nvme/overhead/overhead.o 00:02:44.553 CC test/nvme/err_injection/err_injection.o 00:02:44.553 LINK poller_perf 00:02:44.553 CC examples/util/zipf/zipf.o 00:02:44.553 CXX test/cpp_headers/file.o 00:02:44.553 CC test/nvme/startup/startup.o 00:02:44.553 LINK nvmf 00:02:44.812 CC test/nvme/reserve/reserve.o 00:02:44.812 LINK nvme_dp 00:02:44.812 LINK zipf 00:02:44.812 LINK err_injection 00:02:44.812 CXX test/cpp_headers/ftl.o 00:02:44.812 LINK startup 00:02:44.812 CC examples/thread/thread/thread_ex.o 00:02:44.812 LINK overhead 00:02:44.812 CXX test/cpp_headers/gpt_spec.o 00:02:44.812 CXX test/cpp_headers/hexlify.o 00:02:44.812 LINK reserve 00:02:45.069 CXX test/cpp_headers/histogram_data.o 00:02:45.069 CC test/nvme/simple_copy/simple_copy.o 00:02:45.069 CXX test/cpp_headers/idxd.o 00:02:45.069 CXX test/cpp_headers/idxd_spec.o 00:02:45.069 CXX test/cpp_headers/init.o 00:02:45.069 CC test/nvme/connect_stress/connect_stress.o 00:02:45.069 CXX test/cpp_headers/ioat.o 00:02:45.069 CXX test/cpp_headers/ioat_spec.o 00:02:45.069 CXX test/cpp_headers/iscsi_spec.o 00:02:45.069 LINK thread 00:02:45.328 CXX test/cpp_headers/json.o 00:02:45.328 LINK connect_stress 00:02:45.328 LINK simple_copy 00:02:45.328 CC test/nvme/boot_partition/boot_partition.o 00:02:45.328 CC test/nvme/compliance/nvme_compliance.o 00:02:45.328 CC test/nvme/fused_ordering/fused_ordering.o 00:02:45.328 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:45.328 CC examples/idxd/perf/perf.o 00:02:45.328 CXX test/cpp_headers/jsonrpc.o 00:02:45.586 CXX test/cpp_headers/likely.o 00:02:45.586 CXX test/cpp_headers/log.o 00:02:45.586 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:45.586 LINK boot_partition 00:02:45.586 LINK doorbell_aers 00:02:45.586 LINK fused_ordering 00:02:45.586 LINK nvme_compliance 00:02:45.586 CXX test/cpp_headers/lvol.o 00:02:45.586 CXX test/cpp_headers/memory.o 00:02:45.586 LINK interrupt_tgt 00:02:45.586 CC test/nvme/fdp/fdp.o 00:02:45.586 CC test/nvme/cuse/cuse.o 00:02:45.586 CXX 
test/cpp_headers/mmio.o 00:02:45.844 CXX test/cpp_headers/nbd.o 00:02:45.844 CXX test/cpp_headers/notify.o 00:02:45.844 LINK idxd_perf 00:02:45.844 CXX test/cpp_headers/nvme.o 00:02:45.844 CXX test/cpp_headers/nvme_intel.o 00:02:45.844 CXX test/cpp_headers/nvme_ocssd.o 00:02:45.844 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:45.844 CXX test/cpp_headers/nvme_spec.o 00:02:45.844 CXX test/cpp_headers/nvme_zns.o 00:02:45.844 CXX test/cpp_headers/nvmf_cmd.o 00:02:45.844 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:46.102 LINK fdp 00:02:46.102 CXX test/cpp_headers/nvmf.o 00:02:46.102 CXX test/cpp_headers/nvmf_spec.o 00:02:46.103 CXX test/cpp_headers/nvmf_transport.o 00:02:46.103 CXX test/cpp_headers/opal.o 00:02:46.103 CXX test/cpp_headers/opal_spec.o 00:02:46.103 CXX test/cpp_headers/pci_ids.o 00:02:46.103 CXX test/cpp_headers/pipe.o 00:02:46.103 CXX test/cpp_headers/queue.o 00:02:46.103 CXX test/cpp_headers/reduce.o 00:02:46.103 CXX test/cpp_headers/rpc.o 00:02:46.103 CXX test/cpp_headers/scheduler.o 00:02:46.398 CXX test/cpp_headers/scsi.o 00:02:46.398 CXX test/cpp_headers/scsi_spec.o 00:02:46.398 CXX test/cpp_headers/sock.o 00:02:46.398 CXX test/cpp_headers/stdinc.o 00:02:46.398 CXX test/cpp_headers/string.o 00:02:46.398 CXX test/cpp_headers/thread.o 00:02:46.398 CXX test/cpp_headers/trace.o 00:02:46.398 CXX test/cpp_headers/trace_parser.o 00:02:46.398 CXX test/cpp_headers/tree.o 00:02:46.398 CXX test/cpp_headers/ublk.o 00:02:46.398 CXX test/cpp_headers/util.o 00:02:46.398 CXX test/cpp_headers/uuid.o 00:02:46.398 LINK esnap 00:02:46.398 CXX test/cpp_headers/version.o 00:02:46.398 CXX test/cpp_headers/vfio_user_pci.o 00:02:46.398 CXX test/cpp_headers/vfio_user_spec.o 00:02:46.656 CXX test/cpp_headers/vhost.o 00:02:46.656 CXX test/cpp_headers/vmd.o 00:02:46.656 CXX test/cpp_headers/xor.o 00:02:46.656 CXX test/cpp_headers/zipf.o 00:02:46.656 LINK cuse 00:02:46.914 00:02:46.914 real 0m58.992s 00:02:46.914 user 6m27.098s 00:02:46.914 sys 1m21.776s 00:02:46.914 15:01:16 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:46.914 15:01:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.914 ************************************ 00:02:46.914 END TEST make 00:02:46.914 ************************************ 00:02:47.173 15:01:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:47.173 15:01:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:47.173 15:01:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:47.173 15:01:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:47.173 15:01:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:47.173 15:01:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:47.173 15:01:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:47.173 15:01:16 -- scripts/common.sh@335 -- # IFS=.-: 00:02:47.173 15:01:16 -- scripts/common.sh@335 -- # read -ra ver1 00:02:47.173 15:01:16 -- scripts/common.sh@336 -- # IFS=.-: 00:02:47.173 15:01:16 -- scripts/common.sh@336 -- # read -ra ver2 00:02:47.173 15:01:16 -- scripts/common.sh@337 -- # local 'op=<' 00:02:47.173 15:01:16 -- scripts/common.sh@339 -- # ver1_l=2 00:02:47.173 15:01:16 -- scripts/common.sh@340 -- # ver2_l=1 00:02:47.173 15:01:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:47.173 15:01:16 -- scripts/common.sh@343 -- # case "$op" in 00:02:47.173 15:01:16 -- scripts/common.sh@344 -- # : 1 00:02:47.173 15:01:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:47.173 15:01:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:47.173 15:01:16 -- scripts/common.sh@364 -- # decimal 1 00:02:47.173 15:01:16 -- scripts/common.sh@352 -- # local d=1 00:02:47.173 15:01:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:47.173 15:01:16 -- scripts/common.sh@354 -- # echo 1 00:02:47.173 15:01:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:47.173 15:01:16 -- scripts/common.sh@365 -- # decimal 2 00:02:47.173 15:01:16 -- scripts/common.sh@352 -- # local d=2 00:02:47.173 15:01:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:47.173 15:01:16 -- scripts/common.sh@354 -- # echo 2 00:02:47.173 15:01:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:47.173 15:01:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:47.173 15:01:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:47.173 15:01:16 -- scripts/common.sh@367 -- # return 0 00:02:47.173 15:01:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:47.173 15:01:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:47.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.173 --rc genhtml_branch_coverage=1 00:02:47.173 --rc genhtml_function_coverage=1 00:02:47.173 --rc genhtml_legend=1 00:02:47.173 --rc geninfo_all_blocks=1 00:02:47.173 --rc geninfo_unexecuted_blocks=1 00:02:47.173 00:02:47.173 ' 00:02:47.173 15:01:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:47.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.173 --rc genhtml_branch_coverage=1 00:02:47.173 --rc genhtml_function_coverage=1 00:02:47.173 --rc genhtml_legend=1 00:02:47.173 --rc geninfo_all_blocks=1 00:02:47.173 --rc geninfo_unexecuted_blocks=1 00:02:47.173 00:02:47.173 ' 00:02:47.173 15:01:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:47.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.173 --rc genhtml_branch_coverage=1 00:02:47.173 --rc genhtml_function_coverage=1 00:02:47.173 --rc genhtml_legend=1 00:02:47.173 --rc geninfo_all_blocks=1 00:02:47.173 --rc geninfo_unexecuted_blocks=1 00:02:47.173 00:02:47.173 ' 00:02:47.173 15:01:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:47.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.173 --rc genhtml_branch_coverage=1 00:02:47.173 --rc genhtml_function_coverage=1 00:02:47.173 --rc genhtml_legend=1 00:02:47.173 --rc geninfo_all_blocks=1 00:02:47.173 --rc geninfo_unexecuted_blocks=1 00:02:47.173 00:02:47.173 ' 00:02:47.173 15:01:16 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:47.173 15:01:16 -- nvmf/common.sh@7 -- # uname -s 00:02:47.173 15:01:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:47.173 15:01:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:47.173 15:01:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:47.173 15:01:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:47.173 15:01:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:47.173 15:01:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:47.173 15:01:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:47.173 15:01:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:47.173 15:01:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:47.173 15:01:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:47.173 15:01:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:02:47.173 
15:01:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:02:47.173 15:01:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:47.173 15:01:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:47.173 15:01:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:02:47.173 15:01:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:47.173 15:01:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:47.173 15:01:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:47.173 15:01:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:47.173 15:01:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.173 15:01:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.173 15:01:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.173 15:01:16 -- paths/export.sh@5 -- # export PATH 00:02:47.173 15:01:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.173 15:01:16 -- nvmf/common.sh@46 -- # : 0 00:02:47.173 15:01:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:47.173 15:01:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:47.173 15:01:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:47.173 15:01:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:47.173 15:01:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:47.173 15:01:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:47.173 15:01:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:47.173 15:01:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:47.173 15:01:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:47.173 15:01:16 -- spdk/autotest.sh@32 -- # uname -s 00:02:47.173 15:01:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:47.173 15:01:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:47.173 15:01:16 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:47.173 15:01:16 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:47.173 15:01:16 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:47.173 15:01:16 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:47.173 15:01:16 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:47.173 15:01:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:47.173 15:01:16 -- spdk/autotest.sh@48 -- # 
udevadm_pid=48042 00:02:47.173 15:01:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:47.173 15:01:16 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:02:47.173 15:01:16 -- spdk/autotest.sh@54 -- # echo 48045 00:02:47.173 15:01:16 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:47.173 15:01:16 -- spdk/autotest.sh@56 -- # echo 48048 00:02:47.173 15:01:16 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:47.173 15:01:16 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:02:47.173 15:01:16 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:47.173 15:01:16 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:47.173 15:01:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:47.173 15:01:16 -- common/autotest_common.sh@10 -- # set +x 00:02:47.173 15:01:16 -- spdk/autotest.sh@70 -- # create_test_list 00:02:47.173 15:01:16 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:47.173 15:01:16 -- common/autotest_common.sh@10 -- # set +x 00:02:47.432 15:01:16 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:47.432 15:01:16 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:47.432 15:01:16 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:02:47.432 15:01:16 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:47.432 15:01:16 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:02:47.432 15:01:16 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:47.432 15:01:16 -- common/autotest_common.sh@1450 -- # uname 00:02:47.432 15:01:16 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:02:47.432 15:01:16 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:47.432 15:01:16 -- common/autotest_common.sh@1470 -- # uname 00:02:47.432 15:01:16 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:02:47.432 15:01:16 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:02:47.432 15:01:16 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:47.432 lcov: LCOV version 1.15 00:02:47.432 15:01:16 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:02:55.544 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:55.544 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:55.544 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:55.544 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:17.483 15:01:46 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:17.483 15:01:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:17.483 15:01:46 -- common/autotest_common.sh@10 -- # set +x 00:03:17.483 15:01:46 -- spdk/autotest.sh@89 -- # rm -f 00:03:17.483 15:01:46 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:17.483 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:17.483 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:17.744 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:17.744 15:01:46 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:17.744 15:01:46 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:17.744 15:01:46 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:17.744 15:01:46 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:17.744 15:01:46 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:17.744 15:01:46 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:17.744 15:01:46 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:17.744 15:01:46 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:17.744 15:01:46 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:17.744 15:01:46 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:17.744 15:01:46 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:17.744 15:01:46 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:17.744 15:01:46 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:17.744 15:01:46 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:17.744 15:01:46 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:17.744 15:01:46 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:03:17.744 15:01:46 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:03:17.744 15:01:46 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:17.744 15:01:46 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:17.744 15:01:46 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:17.744 15:01:46 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:03:17.744 15:01:46 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:03:17.744 15:01:46 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:17.744 15:01:46 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:17.744 15:01:46 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:17.744 15:01:46 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:03:17.744 15:01:46 -- spdk/autotest.sh@108 -- # grep -v p 00:03:17.744 15:01:46 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:17.744 15:01:46 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:17.744 15:01:46 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:17.744 15:01:46 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:17.744 15:01:46 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:17.744 No valid GPT data, bailing 00:03:17.744 15:01:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
00:03:17.744 15:01:46 -- scripts/common.sh@393 -- # pt= 00:03:17.744 15:01:46 -- scripts/common.sh@394 -- # return 1 00:03:17.744 15:01:46 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:17.744 1+0 records in 00:03:17.744 1+0 records out 00:03:17.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00433827 s, 242 MB/s 00:03:17.744 15:01:46 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:17.744 15:01:46 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:17.744 15:01:46 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:03:17.744 15:01:46 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:17.744 15:01:46 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:17.744 No valid GPT data, bailing 00:03:17.744 15:01:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:17.744 15:01:46 -- scripts/common.sh@393 -- # pt= 00:03:17.744 15:01:46 -- scripts/common.sh@394 -- # return 1 00:03:17.744 15:01:46 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:17.744 1+0 records in 00:03:17.744 1+0 records out 00:03:17.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0062426 s, 168 MB/s 00:03:17.744 15:01:46 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:17.744 15:01:46 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:17.744 15:01:46 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:03:17.744 15:01:46 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:03:17.744 15:01:46 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:18.003 No valid GPT data, bailing 00:03:18.003 15:01:47 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:18.003 15:01:47 -- scripts/common.sh@393 -- # pt= 00:03:18.003 15:01:47 -- scripts/common.sh@394 -- # return 1 00:03:18.003 15:01:47 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:18.003 1+0 records in 00:03:18.003 1+0 records out 00:03:18.003 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00391886 s, 268 MB/s 00:03:18.003 15:01:47 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:18.003 15:01:47 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:18.003 15:01:47 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:03:18.003 15:01:47 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:03:18.003 15:01:47 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:18.003 No valid GPT data, bailing 00:03:18.003 15:01:47 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:18.003 15:01:47 -- scripts/common.sh@393 -- # pt= 00:03:18.003 15:01:47 -- scripts/common.sh@394 -- # return 1 00:03:18.003 15:01:47 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:18.003 1+0 records in 00:03:18.003 1+0 records out 00:03:18.003 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0038328 s, 274 MB/s 00:03:18.003 15:01:47 -- spdk/autotest.sh@116 -- # sync 00:03:18.263 15:01:47 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:18.263 15:01:47 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:18.263 15:01:47 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:20.170 15:01:49 -- spdk/autotest.sh@122 -- # uname -s 00:03:20.170 15:01:49 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 
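The xtrace entries above (spdk/autotest.sh@108-112 and scripts/common.sh@380-394) show the pre-test disk preparation pass: each non-partition NVMe namespace is checked for existing data and, if it looks unused, its first 1 MiB is zeroed. A minimal bash sketch of that per-device pattern, reconstructed from the trace only — the exact return logic of block_in_use() and the exit status of spdk-gpt.py are assumptions inferred from the printed lines, not the canonical scripts/common.sh implementation:

block_in_use() {
    # Assumption from the trace: a device counts as "in use" when spdk-gpt.py
    # finds a GPT (it printed "No valid GPT data, bailing" and execution fell
    # through to blkid, so a nonzero exit on no-GPT is assumed) or when blkid
    # reports any partition-table type.
    local block=$1 pt
    if /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$block"; then
        return 0
    fi
    pt=$(blkid -s PTTYPE -o value "$block")
    [[ -n $pt ]] && return 0
    return 1
}

for dev in $(ls /dev/nvme*n* | grep -v p || true); do
    # The [[ -z '' ]] checks at autotest.sh@110 skip zoned devices; none were
    # flagged in this run, so every namespace reaches the in-use check.
    if ! block_in_use "$dev"; then
        # Unused namespace: zero the first 1 MiB so later setup tests start
        # from a clean device, matching the dd lines in the trace above.
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done

Run against /dev/nvme0n1 through /dev/nvme1n3 this reproduces the four "1+0 records in / 1+0 records out" wipes logged above.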
00:03:20.170 15:01:49 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:20.170 15:01:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:20.170 15:01:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:20.170 15:01:49 -- common/autotest_common.sh@10 -- # set +x 00:03:20.170 ************************************ 00:03:20.170 START TEST setup.sh 00:03:20.170 ************************************ 00:03:20.170 15:01:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:20.170 * Looking for test storage... 00:03:20.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:20.170 15:01:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:20.170 15:01:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:20.170 15:01:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:20.170 15:01:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:20.170 15:01:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:20.170 15:01:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:20.170 15:01:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:20.170 15:01:49 -- scripts/common.sh@335 -- # IFS=.-: 00:03:20.170 15:01:49 -- scripts/common.sh@335 -- # read -ra ver1 00:03:20.170 15:01:49 -- scripts/common.sh@336 -- # IFS=.-: 00:03:20.170 15:01:49 -- scripts/common.sh@336 -- # read -ra ver2 00:03:20.170 15:01:49 -- scripts/common.sh@337 -- # local 'op=<' 00:03:20.170 15:01:49 -- scripts/common.sh@339 -- # ver1_l=2 00:03:20.170 15:01:49 -- scripts/common.sh@340 -- # ver2_l=1 00:03:20.170 15:01:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:20.170 15:01:49 -- scripts/common.sh@343 -- # case "$op" in 00:03:20.170 15:01:49 -- scripts/common.sh@344 -- # : 1 00:03:20.170 15:01:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:20.170 15:01:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:20.170 15:01:49 -- scripts/common.sh@364 -- # decimal 1 00:03:20.170 15:01:49 -- scripts/common.sh@352 -- # local d=1 00:03:20.170 15:01:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:20.170 15:01:49 -- scripts/common.sh@354 -- # echo 1 00:03:20.170 15:01:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:20.170 15:01:49 -- scripts/common.sh@365 -- # decimal 2 00:03:20.170 15:01:49 -- scripts/common.sh@352 -- # local d=2 00:03:20.170 15:01:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:20.170 15:01:49 -- scripts/common.sh@354 -- # echo 2 00:03:20.171 15:01:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:20.171 15:01:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:20.171 15:01:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:20.171 15:01:49 -- scripts/common.sh@367 -- # return 0 00:03:20.171 15:01:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:20.171 15:01:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:20.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.171 --rc genhtml_branch_coverage=1 00:03:20.171 --rc genhtml_function_coverage=1 00:03:20.171 --rc genhtml_legend=1 00:03:20.171 --rc geninfo_all_blocks=1 00:03:20.171 --rc geninfo_unexecuted_blocks=1 00:03:20.171 00:03:20.171 ' 00:03:20.171 15:01:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:20.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.171 --rc genhtml_branch_coverage=1 00:03:20.171 --rc genhtml_function_coverage=1 00:03:20.171 --rc genhtml_legend=1 00:03:20.171 --rc geninfo_all_blocks=1 00:03:20.171 --rc geninfo_unexecuted_blocks=1 00:03:20.171 00:03:20.171 ' 00:03:20.171 15:01:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:20.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.171 --rc genhtml_branch_coverage=1 00:03:20.171 --rc genhtml_function_coverage=1 00:03:20.171 --rc genhtml_legend=1 00:03:20.171 --rc geninfo_all_blocks=1 00:03:20.171 --rc geninfo_unexecuted_blocks=1 00:03:20.171 00:03:20.171 ' 00:03:20.171 15:01:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:20.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.171 --rc genhtml_branch_coverage=1 00:03:20.171 --rc genhtml_function_coverage=1 00:03:20.171 --rc genhtml_legend=1 00:03:20.171 --rc geninfo_all_blocks=1 00:03:20.171 --rc geninfo_unexecuted_blocks=1 00:03:20.171 00:03:20.171 ' 00:03:20.171 15:01:49 -- setup/test-setup.sh@10 -- # uname -s 00:03:20.171 15:01:49 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:20.171 15:01:49 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:20.171 15:01:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:20.171 15:01:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:20.171 15:01:49 -- common/autotest_common.sh@10 -- # set +x 00:03:20.171 ************************************ 00:03:20.171 START TEST acl 00:03:20.171 ************************************ 00:03:20.171 15:01:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:20.431 * Looking for test storage... 
00:03:20.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:20.431 15:01:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:20.431 15:01:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:20.431 15:01:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:20.431 15:01:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:20.431 15:01:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:20.431 15:01:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:20.431 15:01:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:20.431 15:01:49 -- scripts/common.sh@335 -- # IFS=.-: 00:03:20.431 15:01:49 -- scripts/common.sh@335 -- # read -ra ver1 00:03:20.431 15:01:49 -- scripts/common.sh@336 -- # IFS=.-: 00:03:20.431 15:01:49 -- scripts/common.sh@336 -- # read -ra ver2 00:03:20.431 15:01:49 -- scripts/common.sh@337 -- # local 'op=<' 00:03:20.431 15:01:49 -- scripts/common.sh@339 -- # ver1_l=2 00:03:20.431 15:01:49 -- scripts/common.sh@340 -- # ver2_l=1 00:03:20.431 15:01:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:20.431 15:01:49 -- scripts/common.sh@343 -- # case "$op" in 00:03:20.431 15:01:49 -- scripts/common.sh@344 -- # : 1 00:03:20.431 15:01:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:20.431 15:01:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:20.431 15:01:49 -- scripts/common.sh@364 -- # decimal 1 00:03:20.431 15:01:49 -- scripts/common.sh@352 -- # local d=1 00:03:20.431 15:01:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:20.431 15:01:49 -- scripts/common.sh@354 -- # echo 1 00:03:20.431 15:01:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:20.431 15:01:49 -- scripts/common.sh@365 -- # decimal 2 00:03:20.431 15:01:49 -- scripts/common.sh@352 -- # local d=2 00:03:20.431 15:01:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:20.431 15:01:49 -- scripts/common.sh@354 -- # echo 2 00:03:20.431 15:01:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:20.431 15:01:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:20.431 15:01:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:20.431 15:01:49 -- scripts/common.sh@367 -- # return 0 00:03:20.431 15:01:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:20.431 15:01:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:20.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.431 --rc genhtml_branch_coverage=1 00:03:20.431 --rc genhtml_function_coverage=1 00:03:20.431 --rc genhtml_legend=1 00:03:20.431 --rc geninfo_all_blocks=1 00:03:20.431 --rc geninfo_unexecuted_blocks=1 00:03:20.431 00:03:20.431 ' 00:03:20.431 15:01:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:20.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.431 --rc genhtml_branch_coverage=1 00:03:20.431 --rc genhtml_function_coverage=1 00:03:20.431 --rc genhtml_legend=1 00:03:20.431 --rc geninfo_all_blocks=1 00:03:20.431 --rc geninfo_unexecuted_blocks=1 00:03:20.431 00:03:20.431 ' 00:03:20.431 15:01:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:20.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.431 --rc genhtml_branch_coverage=1 00:03:20.431 --rc genhtml_function_coverage=1 00:03:20.431 --rc genhtml_legend=1 00:03:20.431 --rc geninfo_all_blocks=1 00:03:20.431 --rc geninfo_unexecuted_blocks=1 00:03:20.431 00:03:20.431 ' 00:03:20.431 15:01:49 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:20.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.431 --rc genhtml_branch_coverage=1 00:03:20.431 --rc genhtml_function_coverage=1 00:03:20.431 --rc genhtml_legend=1 00:03:20.431 --rc geninfo_all_blocks=1 00:03:20.431 --rc geninfo_unexecuted_blocks=1 00:03:20.431 00:03:20.431 ' 00:03:20.431 15:01:49 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:20.431 15:01:49 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:20.431 15:01:49 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:20.431 15:01:49 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:20.431 15:01:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:20.431 15:01:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:20.431 15:01:49 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:20.431 15:01:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:20.431 15:01:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:20.431 15:01:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:20.431 15:01:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:20.431 15:01:49 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:20.431 15:01:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:20.431 15:01:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:20.431 15:01:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:20.431 15:01:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:03:20.431 15:01:49 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:03:20.431 15:01:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:20.431 15:01:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:20.431 15:01:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:20.431 15:01:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:03:20.431 15:01:49 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:03:20.431 15:01:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:20.431 15:01:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:20.431 15:01:49 -- setup/acl.sh@12 -- # devs=() 00:03:20.431 15:01:49 -- setup/acl.sh@12 -- # declare -a devs 00:03:20.431 15:01:49 -- setup/acl.sh@13 -- # drivers=() 00:03:20.431 15:01:49 -- setup/acl.sh@13 -- # declare -A drivers 00:03:20.431 15:01:49 -- setup/acl.sh@51 -- # setup reset 00:03:20.431 15:01:49 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:20.431 15:01:49 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:20.999 15:01:50 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:20.999 15:01:50 -- setup/acl.sh@16 -- # local dev driver 00:03:20.999 15:01:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.999 15:01:50 -- setup/acl.sh@15 -- # setup output status 00:03:20.999 15:01:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.999 15:01:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:21.259 Hugepages 00:03:21.259 node hugesize free / total 00:03:21.259 15:01:50 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:21.259 15:01:50 -- setup/acl.sh@19 -- # continue 00:03:21.259 15:01:50 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:03:21.259 00:03:21.259 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:21.259 15:01:50 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:21.259 15:01:50 -- setup/acl.sh@19 -- # continue 00:03:21.259 15:01:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.259 15:01:50 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:21.259 15:01:50 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:21.259 15:01:50 -- setup/acl.sh@20 -- # continue 00:03:21.259 15:01:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.519 15:01:50 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:21.519 15:01:50 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:21.519 15:01:50 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:21.519 15:01:50 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:21.519 15:01:50 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:21.519 15:01:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.519 15:01:50 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:21.519 15:01:50 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:21.519 15:01:50 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:21.519 15:01:50 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:21.519 15:01:50 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:21.519 15:01:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.519 15:01:50 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:21.519 15:01:50 -- setup/acl.sh@54 -- # run_test denied denied 00:03:21.519 15:01:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:21.519 15:01:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:21.519 15:01:50 -- common/autotest_common.sh@10 -- # set +x 00:03:21.519 ************************************ 00:03:21.519 START TEST denied 00:03:21.519 ************************************ 00:03:21.519 15:01:50 -- common/autotest_common.sh@1114 -- # denied 00:03:21.519 15:01:50 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:21.519 15:01:50 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:21.519 15:01:50 -- setup/acl.sh@38 -- # setup output config 00:03:21.519 15:01:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.519 15:01:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:22.456 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:22.456 15:01:51 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:22.456 15:01:51 -- setup/acl.sh@28 -- # local dev driver 00:03:22.456 15:01:51 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:22.456 15:01:51 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:22.456 15:01:51 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:22.456 15:01:51 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:22.456 15:01:51 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:22.456 15:01:51 -- setup/acl.sh@41 -- # setup reset 00:03:22.456 15:01:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:22.456 15:01:51 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:23.025 00:03:23.025 real 0m1.465s 00:03:23.025 user 0m0.572s 00:03:23.025 sys 0m0.840s 00:03:23.025 15:01:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:23.025 15:01:52 -- common/autotest_common.sh@10 -- # set +x 00:03:23.025 ************************************ 00:03:23.025 END TEST denied 00:03:23.025 
************************************ 00:03:23.025 15:01:52 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:23.025 15:01:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:23.025 15:01:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:23.025 15:01:52 -- common/autotest_common.sh@10 -- # set +x 00:03:23.025 ************************************ 00:03:23.025 START TEST allowed 00:03:23.025 ************************************ 00:03:23.025 15:01:52 -- common/autotest_common.sh@1114 -- # allowed 00:03:23.025 15:01:52 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:03:23.025 15:01:52 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:03:23.025 15:01:52 -- setup/acl.sh@45 -- # setup output config 00:03:23.025 15:01:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.025 15:01:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:23.964 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:23.964 15:01:52 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:03:23.964 15:01:52 -- setup/acl.sh@28 -- # local dev driver 00:03:23.964 15:01:52 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:23.964 15:01:52 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:03:23.964 15:01:52 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:03:23.964 15:01:52 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:23.964 15:01:52 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:23.964 15:01:52 -- setup/acl.sh@48 -- # setup reset 00:03:23.964 15:01:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:23.964 15:01:52 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:24.533 00:03:24.533 real 0m1.458s 00:03:24.533 user 0m0.668s 00:03:24.533 sys 0m0.799s 00:03:24.533 15:01:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:24.533 15:01:53 -- common/autotest_common.sh@10 -- # set +x 00:03:24.533 ************************************ 00:03:24.533 END TEST allowed 00:03:24.533 ************************************ 00:03:24.533 00:03:24.533 real 0m4.253s 00:03:24.533 user 0m1.858s 00:03:24.533 sys 0m2.378s 00:03:24.533 15:01:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:24.533 15:01:53 -- common/autotest_common.sh@10 -- # set +x 00:03:24.533 ************************************ 00:03:24.533 END TEST acl 00:03:24.533 ************************************ 00:03:24.533 15:01:53 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:24.533 15:01:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:24.533 15:01:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:24.533 15:01:53 -- common/autotest_common.sh@10 -- # set +x 00:03:24.533 ************************************ 00:03:24.533 START TEST hugepages 00:03:24.533 ************************************ 00:03:24.533 15:01:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:24.533 * Looking for test storage... 
00:03:24.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:24.533 15:01:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:24.533 15:01:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:24.533 15:01:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:24.794 15:01:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:24.794 15:01:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:24.794 15:01:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:24.794 15:01:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:24.794 15:01:53 -- scripts/common.sh@335 -- # IFS=.-: 00:03:24.794 15:01:53 -- scripts/common.sh@335 -- # read -ra ver1 00:03:24.794 15:01:53 -- scripts/common.sh@336 -- # IFS=.-: 00:03:24.794 15:01:53 -- scripts/common.sh@336 -- # read -ra ver2 00:03:24.794 15:01:53 -- scripts/common.sh@337 -- # local 'op=<' 00:03:24.794 15:01:53 -- scripts/common.sh@339 -- # ver1_l=2 00:03:24.794 15:01:53 -- scripts/common.sh@340 -- # ver2_l=1 00:03:24.794 15:01:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:24.794 15:01:53 -- scripts/common.sh@343 -- # case "$op" in 00:03:24.794 15:01:53 -- scripts/common.sh@344 -- # : 1 00:03:24.794 15:01:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:24.794 15:01:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:24.794 15:01:53 -- scripts/common.sh@364 -- # decimal 1 00:03:24.794 15:01:53 -- scripts/common.sh@352 -- # local d=1 00:03:24.794 15:01:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:24.794 15:01:53 -- scripts/common.sh@354 -- # echo 1 00:03:24.794 15:01:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:24.794 15:01:53 -- scripts/common.sh@365 -- # decimal 2 00:03:24.794 15:01:53 -- scripts/common.sh@352 -- # local d=2 00:03:24.794 15:01:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:24.794 15:01:53 -- scripts/common.sh@354 -- # echo 2 00:03:24.794 15:01:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:24.794 15:01:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:24.794 15:01:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:24.794 15:01:53 -- scripts/common.sh@367 -- # return 0 00:03:24.794 15:01:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:24.794 15:01:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:24.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.794 --rc genhtml_branch_coverage=1 00:03:24.794 --rc genhtml_function_coverage=1 00:03:24.794 --rc genhtml_legend=1 00:03:24.794 --rc geninfo_all_blocks=1 00:03:24.794 --rc geninfo_unexecuted_blocks=1 00:03:24.794 00:03:24.794 ' 00:03:24.794 15:01:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:24.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.794 --rc genhtml_branch_coverage=1 00:03:24.795 --rc genhtml_function_coverage=1 00:03:24.795 --rc genhtml_legend=1 00:03:24.795 --rc geninfo_all_blocks=1 00:03:24.795 --rc geninfo_unexecuted_blocks=1 00:03:24.795 00:03:24.795 ' 00:03:24.795 15:01:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:24.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.795 --rc genhtml_branch_coverage=1 00:03:24.795 --rc genhtml_function_coverage=1 00:03:24.795 --rc genhtml_legend=1 00:03:24.795 --rc geninfo_all_blocks=1 00:03:24.795 --rc geninfo_unexecuted_blocks=1 00:03:24.795 00:03:24.795 ' 00:03:24.795 15:01:53 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:24.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.795 --rc genhtml_branch_coverage=1 00:03:24.795 --rc genhtml_function_coverage=1 00:03:24.795 --rc genhtml_legend=1 00:03:24.795 --rc geninfo_all_blocks=1 00:03:24.795 --rc geninfo_unexecuted_blocks=1 00:03:24.795 00:03:24.795 ' 00:03:24.795 15:01:53 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:24.795 15:01:53 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:24.795 15:01:53 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:24.795 15:01:53 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:24.795 15:01:53 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:24.795 15:01:53 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:24.795 15:01:53 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:24.795 15:01:53 -- setup/common.sh@18 -- # local node= 00:03:24.795 15:01:53 -- setup/common.sh@19 -- # local var val 00:03:24.795 15:01:53 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.795 15:01:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.795 15:01:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.795 15:01:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.795 15:01:53 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.795 15:01:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 5984696 kB' 'MemAvailable: 7365164 kB' 'Buffers: 3704 kB' 'Cached: 1592952 kB' 'SwapCached: 0 kB' 'Active: 455284 kB' 'Inactive: 1258316 kB' 'Active(anon): 127452 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118556 kB' 'Mapped: 51160 kB' 'Shmem: 10508 kB' 'KReclaimable: 62536 kB' 'Slab: 155364 kB' 'SReclaimable: 62536 kB' 'SUnreclaim: 92828 kB' 'KernelStack: 6480 kB' 'PageTables: 4588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411012 kB' 'Committed_AS: 321200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- 
setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.795 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.795 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # continue 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.796 15:01:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.796 15:01:53 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.796 15:01:53 -- setup/common.sh@33 -- # echo 2048 00:03:24.796 15:01:53 -- setup/common.sh@33 -- # return 0 00:03:24.796 15:01:53 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:24.796 15:01:53 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:24.796 15:01:53 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:24.796 15:01:53 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:24.796 15:01:53 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:24.796 15:01:53 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:24.796 15:01:53 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:24.796 15:01:53 -- setup/hugepages.sh@207 -- # get_nodes 00:03:24.796 15:01:53 -- setup/hugepages.sh@27 -- # local node 00:03:24.796 15:01:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.796 15:01:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:24.796 15:01:53 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:24.796 15:01:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.796 15:01:53 -- setup/hugepages.sh@208 -- # clear_hp 00:03:24.796 15:01:53 -- setup/hugepages.sh@37 -- # local node hp 00:03:24.796 15:01:53 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:24.796 15:01:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.796 15:01:53 -- setup/hugepages.sh@41 -- # echo 0 00:03:24.797 15:01:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.797 15:01:53 -- setup/hugepages.sh@41 -- # echo 0 00:03:24.797 15:01:53 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:24.797 15:01:53 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:24.797 15:01:53 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:24.797 15:01:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:24.797 15:01:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:24.797 15:01:53 -- common/autotest_common.sh@10 -- # set +x 00:03:24.797 ************************************ 00:03:24.797 START TEST default_setup 00:03:24.797 ************************************ 00:03:24.797 15:01:53 -- common/autotest_common.sh@1114 -- # default_setup 00:03:24.797 15:01:53 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:24.797 15:01:53 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:24.797 15:01:53 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:24.797 15:01:53 -- setup/hugepages.sh@51 -- # shift 00:03:24.797 15:01:53 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:24.797 15:01:53 -- setup/hugepages.sh@52 -- # local node_ids 00:03:24.797 15:01:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.797 15:01:53 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:24.797 15:01:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:24.797 15:01:53 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:24.797 15:01:53 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.797 15:01:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.797 15:01:53 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:24.797 15:01:53 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.797 15:01:53 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.797 15:01:53 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:24.797 15:01:53 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:24.797 15:01:53 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:24.797 15:01:53 -- setup/hugepages.sh@73 -- # return 0 00:03:24.797 15:01:53 -- setup/hugepages.sh@137 -- # setup output 00:03:24.797 15:01:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.797 15:01:53 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:25.367 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:25.632 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:25.632 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:25.632 15:01:54 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:25.632 15:01:54 -- setup/hugepages.sh@89 -- # local node 00:03:25.632 15:01:54 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:25.632 15:01:54 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:25.632 15:01:54 -- setup/hugepages.sh@92 -- # local surp 00:03:25.632 15:01:54 -- setup/hugepages.sh@93 -- # local resv 00:03:25.632 15:01:54 -- setup/hugepages.sh@94 -- # local anon 00:03:25.632 15:01:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:25.632 15:01:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:25.632 15:01:54 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:25.632 15:01:54 -- setup/common.sh@18 -- # local node= 00:03:25.632 15:01:54 -- setup/common.sh@19 -- # local var val 00:03:25.632 15:01:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.632 15:01:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.632 15:01:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.632 15:01:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.632 15:01:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.632 15:01:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8085188 kB' 'MemAvailable: 9465468 kB' 'Buffers: 3704 kB' 'Cached: 1592948 kB' 'SwapCached: 0 kB' 'Active: 456820 kB' 'Inactive: 1258320 kB' 'Active(anon): 128988 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 120388 kB' 'Mapped: 50860 kB' 'Shmem: 10488 kB' 'KReclaimable: 62148 kB' 'Slab: 154964 kB' 'SReclaimable: 62148 kB' 'SUnreclaim: 92816 kB' 'KernelStack: 6448 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 323312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.632 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.632 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.633 15:01:54 -- 
setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.633 15:01:54 -- setup/common.sh@33 -- # echo 0 00:03:25.633 15:01:54 -- setup/common.sh@33 -- # return 0 00:03:25.633 15:01:54 -- setup/hugepages.sh@97 -- # anon=0 00:03:25.633 15:01:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:25.633 15:01:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.633 15:01:54 -- setup/common.sh@18 -- # local node= 00:03:25.633 15:01:54 -- setup/common.sh@19 -- # local var val 00:03:25.633 15:01:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.633 15:01:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.633 15:01:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.633 15:01:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.633 15:01:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.633 15:01:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8084936 kB' 'MemAvailable: 9465212 kB' 'Buffers: 3704 kB' 'Cached: 1592944 kB' 'SwapCached: 0 kB' 'Active: 456700 kB' 'Inactive: 1258320 kB' 'Active(anon): 128868 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119988 kB' 'Mapped: 50732 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 154960 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92816 kB' 'KernelStack: 6432 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 323312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.633 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.633 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- 
setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 
00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.634 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.634 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.635 15:01:54 -- setup/common.sh@33 -- # echo 0 00:03:25.635 15:01:54 -- setup/common.sh@33 -- # return 0 00:03:25.635 15:01:54 -- setup/hugepages.sh@99 -- # surp=0 00:03:25.635 15:01:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:25.635 15:01:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:25.635 15:01:54 -- setup/common.sh@18 -- # local node= 00:03:25.635 15:01:54 -- setup/common.sh@19 -- # local var val 00:03:25.635 15:01:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.635 15:01:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.635 15:01:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.635 15:01:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.635 15:01:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.635 15:01:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.635 
15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8085636 kB' 'MemAvailable: 9465912 kB' 'Buffers: 3704 kB' 'Cached: 1592944 kB' 'SwapCached: 0 kB' 'Active: 456656 kB' 'Inactive: 1258320 kB' 'Active(anon): 128824 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119944 kB' 'Mapped: 50732 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 154956 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92812 kB' 'KernelStack: 6416 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 323312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 
15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.635 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.635 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 
15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.636 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.636 15:01:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.636 15:01:54 -- setup/common.sh@33 -- # echo 0 00:03:25.636 15:01:54 -- setup/common.sh@33 -- # return 0 00:03:25.636 15:01:54 -- setup/hugepages.sh@100 -- # resv=0 00:03:25.636 15:01:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:25.636 nr_hugepages=1024 00:03:25.636 resv_hugepages=0 00:03:25.636 surplus_hugepages=0 00:03:25.636 anon_hugepages=0 00:03:25.636 15:01:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:25.636 15:01:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:25.636 15:01:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:25.636 15:01:54 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:25.636 15:01:54 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:25.636 15:01:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:25.636 15:01:54 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:25.636 15:01:54 -- setup/common.sh@18 -- # local node= 00:03:25.636 15:01:54 -- setup/common.sh@19 -- # local var val 00:03:25.637 15:01:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.637 15:01:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.637 15:01:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.637 15:01:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.637 15:01:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.637 15:01:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8085636 kB' 'MemAvailable: 9465920 kB' 'Buffers: 3704 kB' 'Cached: 1592944 kB' 'SwapCached: 0 kB' 'Active: 456548 kB' 'Inactive: 1258328 kB' 'Active(anon): 128716 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119848 kB' 'Mapped: 50732 kB' 
'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 154956 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92812 kB' 'KernelStack: 6448 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 323312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 
-- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.637 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.637 15:01:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.637 15:01:54 -- 
setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- 
setup/common.sh@32 -- # continue 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.638 15:01:54 -- setup/common.sh@33 -- # echo 1024 00:03:25.638 15:01:54 -- setup/common.sh@33 -- # return 0 00:03:25.638 15:01:54 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:25.638 15:01:54 -- setup/hugepages.sh@112 -- # get_nodes 00:03:25.638 15:01:54 -- setup/hugepages.sh@27 -- # local node 00:03:25.638 15:01:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.638 15:01:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:25.638 15:01:54 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:25.638 15:01:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.638 15:01:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.638 15:01:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.638 15:01:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:25.638 15:01:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.638 15:01:54 -- setup/common.sh@18 -- # local node=0 00:03:25.638 15:01:54 -- setup/common.sh@19 -- # local var val 00:03:25.638 15:01:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.638 15:01:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.638 15:01:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:25.638 15:01:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:25.638 15:01:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.638 15:01:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.638 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.638 15:01:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8086676 kB' 'MemUsed: 4152444 kB' 'SwapCached: 0 kB' 'Active: 456568 kB' 'Inactive: 1258328 kB' 'Active(anon): 128736 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 1596648 kB' 'Mapped: 50732 kB' 'AnonPages: 119880 kB' 'Shmem: 10484 kB' 'KernelStack: 6464 kB' 'PageTables: 4564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62144 kB' 'Slab: 154956 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92812 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:25.638 15:01:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.639 
15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.639 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.639 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # continue 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.945 15:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.945 15:01:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.945 15:01:54 -- setup/common.sh@33 -- # echo 0 00:03:25.945 15:01:54 -- setup/common.sh@33 -- # return 0 00:03:25.945 15:01:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.945 15:01:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.945 15:01:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.945 15:01:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.945 node0=1024 expecting 1024 00:03:25.945 15:01:54 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:25.945 15:01:54 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:25.945 ************************************ 00:03:25.945 END TEST default_setup 00:03:25.945 ************************************ 00:03:25.945 00:03:25.945 real 0m0.987s 00:03:25.945 user 0m0.488s 00:03:25.945 sys 0m0.460s 00:03:25.945 15:01:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:25.945 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:03:25.945 15:01:54 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:25.945 15:01:54 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:25.945 15:01:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:25.945 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:03:25.945 ************************************ 00:03:25.945 START TEST per_node_1G_alloc 00:03:25.945 ************************************ 00:03:25.945 15:01:54 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:03:25.945 15:01:54 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:25.945 15:01:54 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:25.945 15:01:54 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:25.945 15:01:54 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:25.945 15:01:54 -- setup/hugepages.sh@51 -- # shift 00:03:25.945 15:01:54 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:25.945 15:01:54 -- setup/hugepages.sh@52 -- # local node_ids 00:03:25.945 15:01:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:25.946 15:01:54 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:25.946 15:01:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:25.946 15:01:54 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:25.946 15:01:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.946 15:01:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:25.946 15:01:54 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:25.946 15:01:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.946 15:01:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.946 15:01:54 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:25.946 15:01:54 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:25.946 15:01:54 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:25.946 15:01:54 -- setup/hugepages.sh@73 -- # return 0 00:03:25.946 15:01:54 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:25.946 15:01:54 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:25.946 15:01:54 -- setup/hugepages.sh@146 -- # setup output 00:03:25.946 15:01:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.946 15:01:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:26.216 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:26.216 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:26.216 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:26.216 15:01:55 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:26.216 15:01:55 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:26.216 15:01:55 -- setup/hugepages.sh@89 -- # local node 00:03:26.216 15:01:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.216 15:01:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.216 15:01:55 -- setup/hugepages.sh@92 -- # local surp 00:03:26.216 15:01:55 -- setup/hugepages.sh@93 -- # local resv 00:03:26.216 15:01:55 -- setup/hugepages.sh@94 -- # local anon 00:03:26.216 15:01:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.216 15:01:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.216 15:01:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.216 15:01:55 -- setup/common.sh@18 -- # local node= 00:03:26.216 15:01:55 -- setup/common.sh@19 -- # local var val 00:03:26.216 15:01:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.216 15:01:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.216 15:01:55 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.216 15:01:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.216 15:01:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.216 15:01:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.216 15:01:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9142512 kB' 'MemAvailable: 10522800 kB' 'Buffers: 3704 kB' 'Cached: 1592944 kB' 'SwapCached: 0 kB' 'Active: 456848 kB' 'Inactive: 1258332 kB' 'Active(anon): 129016 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258332 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 120188 kB' 'Mapped: 50840 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 154932 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92788 kB' 'KernelStack: 6416 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 323312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.216 15:01:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.216 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.216 15:01:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.216 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.216 15:01:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.216 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.216 15:01:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.216 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.216 15:01:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.216 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.216 15:01:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.216 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.216 15:01:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.216 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.216 15:01:55 
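The verify_nr_hugepages step repeated in the xtrace above works by dumping /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node is given), splitting each line on IFS=': ' with read -r var val _, and echoing the value once the requested key matches; every non-matching key produces the "continue" entries that dominate this log. A simplified sketch of that lookup pattern (node handling and the "Node N" prefix stripping from setup/common.sh are omitted) is:

    # Sketch of the get_meminfo pattern visible in the xtrace; not the real
    # setup/common.sh implementation. Returns a single field from /proc/meminfo.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    # e.g. get_meminfo_sketch AnonHugePages   -> "0" on this runner (hence anon=0 below)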
-- setup/common.sh@31 -- # IFS=': ' 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.216 15:01:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.216 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.216 15:01:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.216 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.216 15:01:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.216 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.216 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 
15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.217 15:01:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.217 15:01:55 -- setup/common.sh@33 -- # echo 0 00:03:26.217 15:01:55 -- setup/common.sh@33 -- # return 0 00:03:26.217 15:01:55 -- setup/hugepages.sh@97 -- # anon=0 00:03:26.217 15:01:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.217 15:01:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.217 15:01:55 -- setup/common.sh@18 -- # local node= 00:03:26.217 15:01:55 -- setup/common.sh@19 -- # local var val 00:03:26.217 15:01:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.217 15:01:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.217 15:01:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.217 15:01:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.217 15:01:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.217 15:01:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.217 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9142524 kB' 'MemAvailable: 10522812 kB' 'Buffers: 3704 kB' 'Cached: 1592944 kB' 'SwapCached: 0 kB' 'Active: 456336 kB' 'Inactive: 1258332 kB' 
'Active(anon): 128504 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258332 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119612 kB' 'Mapped: 50732 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 154948 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92804 kB' 'KernelStack: 6432 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 323312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # 
continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.218 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.218 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.219 15:01:55 -- setup/common.sh@33 -- # echo 0 00:03:26.219 15:01:55 -- setup/common.sh@33 -- # return 0 00:03:26.219 15:01:55 -- setup/hugepages.sh@99 -- # surp=0 00:03:26.219 15:01:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.219 15:01:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.219 15:01:55 -- setup/common.sh@18 -- # local node= 00:03:26.219 15:01:55 -- setup/common.sh@19 -- # local var val 00:03:26.219 15:01:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.219 15:01:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.219 15:01:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.219 15:01:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.219 15:01:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.219 15:01:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9142524 kB' 'MemAvailable: 10522812 kB' 'Buffers: 3704 kB' 'Cached: 1592944 kB' 'SwapCached: 0 kB' 'Active: 456572 kB' 'Inactive: 1258332 kB' 'Active(anon): 128740 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258332 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119848 kB' 'Mapped: 50732 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 154948 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92804 kB' 'KernelStack: 6432 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 323312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 
'DirectMap1G: 9437184 kB' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.219 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.219 15:01:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 
00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 
15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.220 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.220 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.221 15:01:55 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.221 15:01:55 -- setup/common.sh@33 -- # echo 0 00:03:26.221 15:01:55 -- setup/common.sh@33 -- # return 0 00:03:26.221 15:01:55 -- setup/hugepages.sh@100 -- # resv=0 00:03:26.221 nr_hugepages=512 00:03:26.221 resv_hugepages=0 00:03:26.221 surplus_hugepages=0 00:03:26.221 anon_hugepages=0 00:03:26.221 15:01:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:26.221 15:01:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.221 15:01:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.221 15:01:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.221 15:01:55 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:26.221 15:01:55 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:26.221 15:01:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.221 15:01:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.221 15:01:55 -- setup/common.sh@18 -- # local node= 00:03:26.221 15:01:55 -- setup/common.sh@19 -- # local var val 00:03:26.221 15:01:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.221 15:01:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.221 15:01:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.221 15:01:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.221 15:01:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.221 15:01:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.221 15:01:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9143264 kB' 'MemAvailable: 10523552 kB' 'Buffers: 3704 kB' 'Cached: 1592944 kB' 'SwapCached: 0 kB' 'Active: 456604 kB' 'Inactive: 1258332 kB' 'Active(anon): 128772 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258332 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119864 kB' 'Mapped: 50732 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 154948 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92804 kB' 'KernelStack: 6432 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 323312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 
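The xtrace above is setup/common.sh's get_meminfo helper scanning a meminfo dump field by field until it reaches the requested key (HugePages_Rsvd here) and echoing its value, which hugepages.sh then folds into surp/resv before checking (( 512 == nr_hugepages + surp + resv )). A minimal bash sketch of that parsing pattern, paraphrased from the trace rather than copied from the script, looks roughly like this:

  shopt -s extglob

  # Sketch of the get_meminfo pattern seen in the trace; names follow the
  # xtrace, but this is an approximation of setup/common.sh, not the script.
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo var val _
      local -a mem
      # With a node index, read the per-node counters instead of the global file.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node N "; strip it (needs extglob).
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # keep scanning until the key matches
          echo "$val"                        # e.g. "0" for HugePages_Rsvd above
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Called as get_meminfo HugePages_Surp 0 it reads /sys/devices/system/node/node0/meminfo, which is why the trace below switches mem_f to the node0 path once a node argument appears.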
00:03:26.221 15:01:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.221 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.221 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 
00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.222 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.222 15:01:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.223 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.223 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.223 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.223 15:01:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.223 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.223 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.223 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.223 15:01:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.223 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.223 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.223 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.223 15:01:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.223 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.223 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.223 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.223 15:01:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.223 15:01:55 -- setup/common.sh@33 -- # echo 512 00:03:26.223 15:01:55 -- setup/common.sh@33 -- # return 0 00:03:26.223 15:01:55 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:26.223 15:01:55 -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.223 15:01:55 -- setup/hugepages.sh@27 -- # local node 00:03:26.223 15:01:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.223 15:01:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.223 15:01:55 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:26.223 15:01:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.223 15:01:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.223 15:01:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.223 15:01:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.223 15:01:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.223 15:01:55 -- setup/common.sh@18 -- # local node=0 00:03:26.223 15:01:55 -- 
setup/common.sh@19 -- # local var val 00:03:26.223 15:01:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.223 15:01:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.223 15:01:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:26.223 15:01:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:26.223 15:01:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.223 15:01:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.223 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.223 15:01:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9143264 kB' 'MemUsed: 3095856 kB' 'SwapCached: 0 kB' 'Active: 456580 kB' 'Inactive: 1258332 kB' 'Active(anon): 128748 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258332 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 1596648 kB' 'Mapped: 50732 kB' 'AnonPages: 119832 kB' 'Shmem: 10484 kB' 'KernelStack: 6432 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62144 kB' 'Slab: 154940 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:26.223 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.223 15:01:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.223 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.223 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.223 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.223 15:01:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.482 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.482 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 
00:03:26.483 15:01:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # continue 
00:03:26.483 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.483 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.483 15:01:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.483 15:01:55 -- setup/common.sh@33 -- # echo 0 00:03:26.483 15:01:55 -- setup/common.sh@33 -- # return 0 00:03:26.483 15:01:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.483 15:01:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.483 15:01:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.483 15:01:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.483 node0=512 expecting 512 00:03:26.483 ************************************ 00:03:26.483 END TEST per_node_1G_alloc 00:03:26.483 ************************************ 00:03:26.483 15:01:55 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:26.483 15:01:55 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:26.483 00:03:26.483 real 0m0.550s 00:03:26.483 user 0m0.266s 00:03:26.483 sys 0m0.287s 00:03:26.483 15:01:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:26.483 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:03:26.483 15:01:55 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:26.483 15:01:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:26.483 15:01:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:26.483 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:03:26.483 ************************************ 00:03:26.483 START TEST even_2G_alloc 00:03:26.483 ************************************ 00:03:26.483 15:01:55 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:03:26.483 15:01:55 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:26.483 15:01:55 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:26.483 15:01:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:26.483 15:01:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.483 15:01:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:26.483 15:01:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:26.483 15:01:55 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:26.483 15:01:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.483 15:01:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:26.483 15:01:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:26.483 15:01:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.483 15:01:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.483 15:01:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 
00:03:26.483 15:01:55 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:26.483 15:01:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.483 15:01:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:26.483 15:01:55 -- setup/hugepages.sh@83 -- # : 0 00:03:26.483 15:01:55 -- setup/hugepages.sh@84 -- # : 0 00:03:26.483 15:01:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.483 15:01:55 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:26.483 15:01:55 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:26.483 15:01:55 -- setup/hugepages.sh@153 -- # setup output 00:03:26.483 15:01:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.483 15:01:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:26.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:26.744 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:26.744 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:26.744 15:01:55 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:26.744 15:01:55 -- setup/hugepages.sh@89 -- # local node 00:03:26.744 15:01:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.744 15:01:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.744 15:01:55 -- setup/hugepages.sh@92 -- # local surp 00:03:26.744 15:01:55 -- setup/hugepages.sh@93 -- # local resv 00:03:26.744 15:01:55 -- setup/hugepages.sh@94 -- # local anon 00:03:26.744 15:01:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.744 15:01:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.744 15:01:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.744 15:01:55 -- setup/common.sh@18 -- # local node= 00:03:26.744 15:01:55 -- setup/common.sh@19 -- # local var val 00:03:26.744 15:01:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.744 15:01:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.744 15:01:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.744 15:01:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.744 15:01:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.744 15:01:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.744 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.744 15:01:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8096052 kB' 'MemAvailable: 9476340 kB' 'Buffers: 3704 kB' 'Cached: 1592944 kB' 'SwapCached: 0 kB' 'Active: 457004 kB' 'Inactive: 1258332 kB' 'Active(anon): 129172 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258332 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 120300 kB' 'Mapped: 50896 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 154912 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92768 kB' 'KernelStack: 6424 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 323312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:26.744 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.744 15:01:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.744 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.744 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.744 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.744 15:01:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.744 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.744 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.744 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.744 15:01:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.744 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.744 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.744 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.744 15:01:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.744 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.744 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.744 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.744 15:01:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.744 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.744 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.745 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.745 15:01:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.746 15:01:55 -- 
setup/common.sh@32 -- # continue 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.746 15:01:55 -- setup/common.sh@33 -- # echo 0 00:03:26.746 15:01:55 -- setup/common.sh@33 -- # return 0 00:03:26.746 15:01:55 -- setup/hugepages.sh@97 -- # anon=0 00:03:26.746 15:01:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.746 15:01:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.746 15:01:55 -- setup/common.sh@18 -- # local node= 00:03:26.746 15:01:55 -- setup/common.sh@19 -- # local var val 00:03:26.746 15:01:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.746 15:01:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.746 15:01:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.746 15:01:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.746 15:01:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.746 15:01:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8095548 kB' 'MemAvailable: 9475836 kB' 'Buffers: 3704 kB' 'Cached: 1592944 kB' 'SwapCached: 0 kB' 'Active: 456624 kB' 'Inactive: 1258332 kB' 'Active(anon): 128792 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258332 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119932 kB' 'Mapped: 50772 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 154932 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92788 kB' 'KernelStack: 6448 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 323312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:55 -- 
setup/common.sh@32 -- # continue 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:55 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:55 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.746 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.746 15:01:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.746 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.746 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 
-- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- 
setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.747 15:01:56 -- setup/common.sh@32 -- # continue 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.747 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.009 15:01:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.009 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.009 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.009 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.009 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.009 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.009 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.009 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.009 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.009 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.009 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.009 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.009 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.009 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.009 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.009 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.009 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.009 15:01:56 -- setup/common.sh@33 -- # echo 0 00:03:27.009 15:01:56 -- setup/common.sh@33 -- # return 0 00:03:27.009 15:01:56 -- setup/hugepages.sh@99 -- # surp=0 
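The repeated '[[ <key> == ... ]] / continue' pairs above are the xtrace of setup/common.sh's get_meminfo helper: it snapshots /proc/meminfo (or a node's meminfo file) with a single printf, then walks that snapshot with IFS=': ' and read -r var val _ until the requested key matches (AnonHugePages for anon, HugePages_Surp for surp), and echoes that value. A minimal standalone sketch of the same pattern, not the SPDK helper itself; the function name and the sed-based node handling are illustrative:

  get_meminfo_sketch() {
      # Illustrative rework of the pattern traced above; not the SPDK test helper itself.
      local get=$1 node=${2:-} mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"    # kB for most keys, a bare page count for HugePages_*
              return 0
          fi
      done < <(sed 's/^Node [0-9]* //' "$mem_f")   # per-node files prefix each key with "Node <n> "
      return 1
  }

  get_meminfo_sketch HugePages_Surp    # prints 0 on the VM captured in this trace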
00:03:27.009 15:01:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:27.009 15:01:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:27.009 15:01:56 -- setup/common.sh@18 -- # local node= 00:03:27.009 15:01:56 -- setup/common.sh@19 -- # local var val 00:03:27.009 15:01:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.009 15:01:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.009 15:01:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.009 15:01:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.009 15:01:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.009 15:01:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.009 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.009 15:01:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8095548 kB' 'MemAvailable: 9475836 kB' 'Buffers: 3704 kB' 'Cached: 1592944 kB' 'SwapCached: 0 kB' 'Active: 456624 kB' 'Inactive: 1258332 kB' 'Active(anon): 128792 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258332 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119932 kB' 'Mapped: 50772 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 154932 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92788 kB' 'KernelStack: 6448 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 323312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:27.009 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.009 15:01:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.009 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 
00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.010 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.010 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 
-- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.011 15:01:56 -- setup/common.sh@33 -- # echo 0 00:03:27.011 15:01:56 -- setup/common.sh@33 -- # return 0 00:03:27.011 15:01:56 -- setup/hugepages.sh@100 -- # resv=0 00:03:27.011 nr_hugepages=1024 00:03:27.011 resv_hugepages=0 00:03:27.011 surplus_hugepages=0 00:03:27.011 anon_hugepages=0 00:03:27.011 15:01:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:27.011 15:01:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:27.011 15:01:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:27.011 15:01:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:27.011 15:01:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.011 15:01:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:27.011 15:01:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:27.011 15:01:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.011 15:01:56 -- setup/common.sh@18 -- # local node= 00:03:27.011 15:01:56 -- setup/common.sh@19 -- # local var val 00:03:27.011 15:01:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.011 15:01:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.011 15:01:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.011 15:01:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.011 15:01:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.011 15:01:56 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8095548 kB' 'MemAvailable: 9475836 kB' 'Buffers: 3704 kB' 'Cached: 1592944 kB' 'SwapCached: 0 kB' 'Active: 456560 kB' 'Inactive: 1258332 kB' 'Active(anon): 128728 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258332 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119820 kB' 'Mapped: 50772 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 154928 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92784 kB' 'KernelStack: 6432 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 323312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.011 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.011 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 
15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 
15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.012 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.012 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.013 15:01:56 -- setup/common.sh@33 -- # echo 1024 00:03:27.013 15:01:56 -- setup/common.sh@33 -- # return 0 00:03:27.013 15:01:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.013 15:01:56 -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.013 15:01:56 -- setup/hugepages.sh@27 -- # local node 00:03:27.013 15:01:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.013 15:01:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:27.013 15:01:56 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:27.013 15:01:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.013 15:01:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.013 15:01:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.013 15:01:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:27.013 15:01:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.013 15:01:56 -- setup/common.sh@18 -- # local node=0 00:03:27.013 15:01:56 -- setup/common.sh@19 -- # local var val 00:03:27.013 15:01:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.013 15:01:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.013 15:01:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.013 15:01:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.013 15:01:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.013 15:01:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8095548 kB' 'MemUsed: 4143572 kB' 'SwapCached: 0 kB' 'Active: 456544 kB' 'Inactive: 1258332 kB' 'Active(anon): 128712 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258332 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 1596648 kB' 'Mapped: 50772 kB' 'AnonPages: 119796 kB' 'Shmem: 10484 kB' 'KernelStack: 6432 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62144 kB' 'Slab: 154928 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.013 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.013 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 15:01:56 -- setup/common.sh@33 -- # echo 0 00:03:27.014 15:01:56 -- setup/common.sh@33 -- # return 0 00:03:27.014 15:01:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.014 15:01:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.014 15:01:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.014 node0=1024 expecting 1024 00:03:27.014 
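With anon, surp and resv all read back as 0 and HugePages_Total reported as 1024, the even_2G_alloc verification above reduces to bookkeeping: the kernel's hugepage counters must add up to the number the test configured, globally and then per NUMA node (only node0 on this single-node VM, hence the 'node0=1024 expecting 1024' echo). A condensed sketch of that arithmetic with hypothetical variable names, not the hugepages.sh internals:

  # Sketch of the consistency check traced above; paths are the standard procfs/sysfs ones.
  expected=1024
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB; expected to stay 0 with THP on [madvise]

  (( anon == 0 )) || { echo "unexpected AnonHugePages: ${anon} kB"; exit 1; }
  (( total == expected + surp + resv )) || { echo 'global hugepage count mismatch'; exit 1; }

  # The same accounting per node; per-node meminfo lines look like "Node 0 HugePages_Total: 1024".
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node_total=$(awk '/HugePages_Total:/ {print $4}' "$node_dir/meminfo")
      echo "node${node_dir##*node}=${node_total} expecting ${expected}"
      (( node_total == expected )) || exit 1
  done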
************************************ 00:03:27.014 END TEST even_2G_alloc 00:03:27.014 ************************************ 00:03:27.014 15:01:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.014 15:01:56 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:27.014 15:01:56 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:27.014 00:03:27.014 real 0m0.573s 00:03:27.014 user 0m0.284s 00:03:27.014 sys 0m0.283s 00:03:27.014 15:01:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:27.014 15:01:56 -- common/autotest_common.sh@10 -- # set +x 00:03:27.014 15:01:56 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:27.014 15:01:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:27.014 15:01:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:27.014 15:01:56 -- common/autotest_common.sh@10 -- # set +x 00:03:27.014 ************************************ 00:03:27.014 START TEST odd_alloc 00:03:27.014 ************************************ 00:03:27.014 15:01:56 -- common/autotest_common.sh@1114 -- # odd_alloc 00:03:27.014 15:01:56 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:27.014 15:01:56 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:27.014 15:01:56 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:27.014 15:01:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.014 15:01:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:27.014 15:01:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:27.014 15:01:56 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:27.014 15:01:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.014 15:01:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:27.014 15:01:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:27.014 15:01:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.014 15:01:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.014 15:01:56 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:27.014 15:01:56 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:27.014 15:01:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.014 15:01:56 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:27.014 15:01:56 -- setup/hugepages.sh@83 -- # : 0 00:03:27.014 15:01:56 -- setup/hugepages.sh@84 -- # : 0 00:03:27.014 15:01:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.014 15:01:56 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:27.014 15:01:56 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:27.014 15:01:56 -- setup/hugepages.sh@160 -- # setup output 00:03:27.014 15:01:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.014 15:01:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:27.274 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:27.274 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:27.274 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:27.537 15:01:56 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:27.537 15:01:56 -- setup/hugepages.sh@89 -- # local node 00:03:27.537 15:01:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:27.537 15:01:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:27.537 15:01:56 -- setup/hugepages.sh@92 -- # local surp 00:03:27.537 15:01:56 -- setup/hugepages.sh@93 -- # local resv 00:03:27.537 15:01:56 -- setup/hugepages.sh@94 -- # 
local anon 00:03:27.537 15:01:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:27.537 15:01:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:27.537 15:01:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:27.537 15:01:56 -- setup/common.sh@18 -- # local node= 00:03:27.537 15:01:56 -- setup/common.sh@19 -- # local var val 00:03:27.537 15:01:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.537 15:01:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.537 15:01:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.537 15:01:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.537 15:01:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.537 15:01:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8093160 kB' 'MemAvailable: 9473448 kB' 'Buffers: 3704 kB' 'Cached: 1592944 kB' 'SwapCached: 0 kB' 'Active: 456796 kB' 'Inactive: 1258332 kB' 'Active(anon): 128964 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258332 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120052 kB' 'Mapped: 50896 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 154980 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92836 kB' 'KernelStack: 6464 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 323312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 
15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.537 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.537 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # 
[[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.538 15:01:56 -- setup/common.sh@33 -- # echo 0 00:03:27.538 15:01:56 -- setup/common.sh@33 -- # return 0 00:03:27.538 15:01:56 -- setup/hugepages.sh@97 -- # anon=0 00:03:27.538 15:01:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:27.538 15:01:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.538 15:01:56 -- setup/common.sh@18 -- # local node= 00:03:27.538 15:01:56 -- setup/common.sh@19 -- # local var val 00:03:27.538 15:01:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.538 15:01:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.538 15:01:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.538 15:01:56 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.538 15:01:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.538 15:01:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8093160 kB' 'MemAvailable: 9473448 kB' 'Buffers: 3704 kB' 'Cached: 1592944 kB' 'SwapCached: 0 kB' 'Active: 456800 kB' 'Inactive: 1258332 kB' 'Active(anon): 128968 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258332 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119764 kB' 'Mapped: 50772 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 154968 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92824 kB' 'KernelStack: 6416 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 323312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 
-- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.538 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.538 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 
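(The xtrace above and below is the meminfo lookup helper scanning every /proc/meminfo key until it reaches the requested field -- here HugePages_Surp -- emitting a "continue" trace entry for each non-matching key and finally echoing the matching value. The following is a minimal standalone sketch of that lookup pattern, for illustration only: the name get_meminfo_sketch is hypothetical, and the real setup/common.sh helper additionally strips "Node <N>" prefixes so the same loop can read the per-node files under /sys/devices/system/node/node<N>/meminfo.)

get_meminfo_sketch() {
    # $1 is the field to look up, e.g. HugePages_Surp or AnonHugePages
    local get=$1 mem_f=/proc/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        # every key that does not match shows up in the trace as "continue"
        [[ $var == "$get" ]] || continue
        echo "$val"   # the numeric value only; a trailing "kB" unit lands in $_
        return 0
    done < "$mem_f"
    return 1
}
# e.g. get_meminfo_sketch HugePages_Surp  -> prints 0 on this run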
00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 
00:03:27.539 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.539 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.539 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.540 15:01:56 -- setup/common.sh@33 -- # echo 0 00:03:27.540 15:01:56 -- setup/common.sh@33 -- # return 0 00:03:27.540 15:01:56 -- setup/hugepages.sh@99 -- # surp=0 00:03:27.540 15:01:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:27.540 15:01:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:27.540 15:01:56 -- setup/common.sh@18 -- # local node= 00:03:27.540 15:01:56 -- setup/common.sh@19 -- # local var val 00:03:27.540 15:01:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.540 15:01:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.540 15:01:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.540 15:01:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.540 15:01:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.540 15:01:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8093160 kB' 'MemAvailable: 9473448 kB' 'Buffers: 3704 kB' 'Cached: 1592944 kB' 'SwapCached: 0 kB' 'Active: 456816 kB' 'Inactive: 1258332 kB' 'Active(anon): 128984 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258332 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120096 kB' 'Mapped: 50772 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 154956 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92812 kB' 'KernelStack: 6432 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 
kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 322944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 
15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.540 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.540 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 
-- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.541 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.541 15:01:56 -- setup/common.sh@33 -- # echo 0 00:03:27.541 15:01:56 -- setup/common.sh@33 -- # return 0 00:03:27.541 15:01:56 -- setup/hugepages.sh@100 -- # resv=0 00:03:27.541 15:01:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:27.541 nr_hugepages=1025 00:03:27.541 resv_hugepages=0 00:03:27.541 surplus_hugepages=0 00:03:27.541 anon_hugepages=0 00:03:27.541 15:01:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:27.541 15:01:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:27.541 15:01:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:27.541 15:01:56 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:27.541 15:01:56 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:27.541 15:01:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:27.541 15:01:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.541 15:01:56 -- setup/common.sh@18 -- # local node= 00:03:27.541 15:01:56 -- setup/common.sh@19 -- # local var val 00:03:27.541 15:01:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.541 15:01:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.541 15:01:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.541 15:01:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.541 15:01:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.541 15:01:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.541 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8093416 kB' 'MemAvailable: 9473704 kB' 'Buffers: 3704 kB' 'Cached: 1592948 kB' 'SwapCached: 0 kB' 'Active: 456564 kB' 'Inactive: 1258332 kB' 'Active(anon): 128732 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258332 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119868 kB' 'Mapped: 50732 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 154940 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92796 kB' 'KernelStack: 6432 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 323312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:27.542 
15:01:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 
15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.542 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.542 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.543 15:01:56 -- setup/common.sh@33 -- # echo 1025 00:03:27.543 15:01:56 -- setup/common.sh@33 -- # return 0 00:03:27.543 15:01:56 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:27.543 15:01:56 -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.543 15:01:56 -- setup/hugepages.sh@27 -- # local node 00:03:27.543 15:01:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.543 15:01:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
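The trace above is the tail of a get_meminfo call from setup/common.sh: each /proc/meminfo key is read with IFS=': ', compared against the requested field, and skipped with continue until HugePages_Total matches, at which point the value (1025) is echoed and the function returns. A minimal stand-alone sketch of that lookup pattern follows; the function name and structure are illustrative, not the literal common.sh source.

#!/usr/bin/env bash
# Illustrative re-creation of the key scan traced above: walk a meminfo-style
# file, skip non-matching keys, print the value of the requested one.
get_meminfo_sketch() {
    local get=$1                      # e.g. HugePages_Total
    local mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every other key just 'continue's, as in the log
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}
get_meminfo_sketch HugePages_Total    # prints 1025 on the node traced in this log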
00:03:27.543 15:01:56 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:27.543 15:01:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.543 15:01:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.543 15:01:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.543 15:01:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:27.543 15:01:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.543 15:01:56 -- setup/common.sh@18 -- # local node=0 00:03:27.543 15:01:56 -- setup/common.sh@19 -- # local var val 00:03:27.543 15:01:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.543 15:01:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.543 15:01:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.543 15:01:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.543 15:01:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.543 15:01:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8093416 kB' 'MemUsed: 4145704 kB' 'SwapCached: 0 kB' 'Active: 456636 kB' 'Inactive: 1258332 kB' 'Active(anon): 128804 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258332 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1596652 kB' 'Mapped: 50732 kB' 'AnonPages: 119968 kB' 'Shmem: 10484 kB' 'KernelStack: 6464 kB' 'PageTables: 4568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62144 kB' 'Slab: 154940 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.543 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.543 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 
15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 
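Here the same scan is being repeated for HugePages_Surp, but with node=0: the trace switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " prefix from every line before reading keys. A hedged sketch of that source-selection step, with illustrative variable names:

# Illustrative sketch of the per-node meminfo selection seen in the trace.
node=0
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
shopt -s extglob                  # the +([0-9]) pattern below needs extglob
mapfile -t mem < "$mem_f"         # slurp the file into an array
mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node 0 " prefix from each line
# The array is then scanned key by key with IFS=': ', exactly as sketched earlier.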
15:01:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # continue 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.544 15:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.544 15:01:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.544 15:01:56 -- setup/common.sh@33 -- # echo 0 00:03:27.544 15:01:56 -- setup/common.sh@33 -- # return 0 00:03:27.544 15:01:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.544 node0=1025 expecting 1025 00:03:27.544 15:01:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.544 15:01:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.545 15:01:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.545 15:01:56 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:27.545 15:01:56 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:27.545 00:03:27.545 real 0m0.555s 00:03:27.545 user 0m0.268s 00:03:27.545 sys 0m0.290s 00:03:27.545 15:01:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:27.545 15:01:56 -- common/autotest_common.sh@10 -- # set +x 00:03:27.545 ************************************ 00:03:27.545 END TEST odd_alloc 00:03:27.545 ************************************ 00:03:27.545 15:01:56 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:27.545 15:01:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:27.545 15:01:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:27.545 15:01:56 -- common/autotest_common.sh@10 -- # set +x 00:03:27.545 ************************************ 00:03:27.545 START TEST custom_alloc 00:03:27.545 ************************************ 00:03:27.545 15:01:56 -- common/autotest_common.sh@1114 -- # custom_alloc 00:03:27.545 15:01:56 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:27.545 15:01:56 -- setup/hugepages.sh@169 -- # local node 00:03:27.545 15:01:56 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:27.545 15:01:56 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:27.545 15:01:56 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:27.545 15:01:56 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:03:27.545 15:01:56 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:27.545 15:01:56 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:27.545 15:01:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.545 15:01:56 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:27.545 15:01:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:27.545 15:01:56 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:27.545 15:01:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.545 15:01:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:27.545 15:01:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:27.545 15:01:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.545 15:01:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.545 15:01:56 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:27.545 15:01:56 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:27.545 15:01:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.545 15:01:56 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:27.545 15:01:56 -- setup/hugepages.sh@83 -- # : 0 00:03:27.545 15:01:56 -- setup/hugepages.sh@84 -- # : 0 00:03:27.545 15:01:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.545 15:01:56 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:27.545 15:01:56 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:27.545 15:01:56 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:27.545 15:01:56 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:27.545 15:01:56 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:27.545 15:01:56 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:27.545 15:01:56 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:27.545 15:01:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.545 15:01:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:27.545 15:01:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:27.545 15:01:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.545 15:01:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.545 15:01:56 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:27.545 15:01:56 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:27.545 15:01:56 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:27.545 15:01:56 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:27.545 15:01:56 -- setup/hugepages.sh@78 -- # return 0 00:03:27.545 15:01:56 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:27.545 15:01:56 -- setup/hugepages.sh@187 -- # setup output 00:03:27.545 15:01:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.545 15:01:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:28.118 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:28.118 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:28.118 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:28.118 15:01:57 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:28.118 15:01:57 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:28.118 15:01:57 -- setup/hugepages.sh@89 -- # local node 00:03:28.118 15:01:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:28.118 15:01:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:28.118 15:01:57 -- setup/hugepages.sh@92 -- # local surp 
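custom_alloc asks get_test_nr_hugepages for a 1048576 kB pool, which, given the 2048 kB hugepage size reported in the meminfo dumps in this log, becomes 512 hugepages, all placed on node 0 because this VM reports a single NUMA node; the resulting HUGENODE string is 'nodes_hp[0]=512'. A worked version of that sizing, with illustrative variable names:

# Worked sizing step, assuming the 2048 kB default hugepage size reported in
# the meminfo dumps in this log.
size_kb=1048576                                      # pool requested by custom_alloc
hugepagesize_kb=2048
nr_hugepages=$(( size_kb / hugepagesize_kb ))        # 512
no_nodes=1                                           # single-node VM in this run
echo "HUGENODE=nodes_hp[0]=$nr_hugepages"            # matches the traced HUGENODE value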
00:03:28.118 15:01:57 -- setup/hugepages.sh@93 -- # local resv 00:03:28.118 15:01:57 -- setup/hugepages.sh@94 -- # local anon 00:03:28.118 15:01:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:28.118 15:01:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:28.118 15:01:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:28.118 15:01:57 -- setup/common.sh@18 -- # local node= 00:03:28.118 15:01:57 -- setup/common.sh@19 -- # local var val 00:03:28.118 15:01:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.118 15:01:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.118 15:01:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.118 15:01:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.118 15:01:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.118 15:01:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.118 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 15:01:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9138836 kB' 'MemAvailable: 10519128 kB' 'Buffers: 3704 kB' 'Cached: 1592948 kB' 'SwapCached: 0 kB' 'Active: 457408 kB' 'Inactive: 1258336 kB' 'Active(anon): 129576 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120448 kB' 'Mapped: 50912 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 155012 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92868 kB' 'KernelStack: 6504 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 323320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:28.118 15:01:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.118 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.118 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 15:01:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.118 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.118 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 15:01:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.118 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.118 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 15:01:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.118 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.118 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 15:01:57 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.118 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.118 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 15:01:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.118 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.118 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 15:01:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.118 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.118 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.118 15:01:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.118 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.118 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.118 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 
00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.119 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.119 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.120 15:01:57 -- setup/common.sh@33 -- # echo 0 00:03:28.120 15:01:57 -- setup/common.sh@33 -- # return 0 00:03:28.120 15:01:57 -- setup/hugepages.sh@97 -- # anon=0 00:03:28.120 15:01:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.120 15:01:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.120 15:01:57 -- setup/common.sh@18 -- # local node= 00:03:28.120 15:01:57 -- setup/common.sh@19 -- # local var val 00:03:28.120 15:01:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.120 15:01:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
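verify_nr_hugepages has just derived anon=0 from AnonHugePages (the THP setting "always [madvise] never" is not [never], so the field is consulted) and now repeats the scan for HugePages_Surp and, further on, HugePages_Rsvd. A self-contained, hedged sketch of the accounting these lookups feed, using awk in place of the script's own helper; the final check mirrors the '(( 1025 == nr_hugepages + surp + resv ))' test visible earlier for odd_alloc:

# Hedged sketch of the hugepage accounting driven by these meminfo lookups.
nr_hugepages=512                                               # requested by custom_alloc above
anon=$(awk  '/^AnonHugePages:/   {print $2}' /proc/meminfo)    # 0 in this trace
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)    # 0 in this trace
resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)    # looked up next in the trace
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
(( total == nr_hugepages + surp + resv )) || echo "hugepage count mismatch"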
00:03:28.120 15:01:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.120 15:01:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.120 15:01:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.120 15:01:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9138584 kB' 'MemAvailable: 10518876 kB' 'Buffers: 3704 kB' 'Cached: 1592948 kB' 'SwapCached: 0 kB' 'Active: 456724 kB' 'Inactive: 1258336 kB' 'Active(anon): 128892 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120044 kB' 'Mapped: 50732 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 155028 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92884 kB' 'KernelStack: 6480 kB' 'PageTables: 4596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 323312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.120 15:01:57 -- 
setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.120 15:01:57 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.120 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.120 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 
00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.121 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.121 15:01:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.122 15:01:57 -- setup/common.sh@33 -- # echo 0 00:03:28.122 15:01:57 -- setup/common.sh@33 -- # return 0 00:03:28.122 15:01:57 -- setup/hugepages.sh@99 -- # surp=0 00:03:28.122 15:01:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:28.122 15:01:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:28.122 15:01:57 -- setup/common.sh@18 -- # local node= 00:03:28.122 15:01:57 -- setup/common.sh@19 -- # local var val 00:03:28.122 15:01:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.122 15:01:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.122 15:01:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.122 15:01:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.122 15:01:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.122 15:01:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9138584 kB' 'MemAvailable: 10518876 kB' 'Buffers: 3704 kB' 'Cached: 1592948 kB' 'SwapCached: 0 kB' 'Active: 456404 kB' 'Inactive: 1258336 kB' 'Active(anon): 128572 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120004 kB' 'Mapped: 50732 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 155020 kB' 
'SReclaimable: 62144 kB' 'SUnreclaim: 92876 kB' 'KernelStack: 6448 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 323312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.122 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.122 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 
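The repeated "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" records in this stretch are setup/common.sh's get_meminfo walking every /proc/meminfo field until it reaches the key it was asked for, then echoing that key's value. A minimal sketch of the same pattern, not the SPDK helper itself (the function name and the example call at the end are illustrative only):

get_meminfo_sketch() {
    local get=$1 var val _
    # Each non-matching key corresponds to one "continue" record in the xtrace above.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}
get_meminfo_sketch HugePages_Rsvd    # 0 on the VM used in this run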
00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 
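The "[[ -e /sys/devices/system/node/node/meminfo ]]" and "[[ -n '' ]]" records at the start of each pass show how the helper chooses its data source: with no node argument it reads the system-wide /proc/meminfo, while a node id switches it to that node's sysfs meminfo, whose lines carry a "Node <N> " prefix that is stripped before parsing. A rough stand-alone illustration of that selection, with simplified argument handling and a plain sed/grep pipeline in place of the script's mapfile step:

node=${1-}                                  # empty means system-wide
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
# Per-node files prefix every line with "Node <N> "; normalize so both sources parse the same way.
sed 's/^Node [0-9]* //' "$mem_f" | grep -E '^HugePages_(Total|Free|Surp):'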
00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.123 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.123 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.123 15:01:57 -- setup/common.sh@33 -- # echo 0 00:03:28.124 15:01:57 -- setup/common.sh@33 -- # return 0 00:03:28.124 15:01:57 -- setup/hugepages.sh@100 -- # resv=0 00:03:28.124 15:01:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:28.124 nr_hugepages=512 00:03:28.124 15:01:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:28.124 resv_hugepages=0 00:03:28.124 surplus_hugepages=0 00:03:28.124 anon_hugepages=0 00:03:28.124 15:01:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:28.124 15:01:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:28.124 15:01:57 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:28.124 15:01:57 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:28.124 15:01:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:28.124 15:01:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:28.124 15:01:57 -- setup/common.sh@18 -- # local node= 00:03:28.124 15:01:57 -- setup/common.sh@19 -- # local var val 00:03:28.124 15:01:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.124 15:01:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.124 15:01:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.124 15:01:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.124 15:01:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.124 15:01:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.124 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.124 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.124 15:01:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9138584 kB' 'MemAvailable: 10518876 kB' 'Buffers: 3704 kB' 'Cached: 1592948 kB' 'SwapCached: 0 kB' 'Active: 456316 kB' 'Inactive: 1258336 kB' 'Active(anon): 128484 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119868 kB' 'Mapped: 50732 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 155020 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92876 kB' 'KernelStack: 6432 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 323312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:28.124 15:01:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.124 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.124 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.124 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.124 15:01:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.124 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.124 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.124 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.124 15:01:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.124 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.124 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.124 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.124 15:01:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.124 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.124 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.124 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.124 15:01:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.124 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.124 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.124 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.124 15:01:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.124 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.124 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.124 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.124 15:01:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.124 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.124 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.124 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.124 15:01:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.124 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.124 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.124 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.124 15:01:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.124 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.124 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.124 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 
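This pass re-reads HugePages_Total so that hugepages.sh@107/@110 can assert the accounting seen a few records back: the 512 pages reported by the kernel must equal the requested nr_hugepages plus the surplus and reserved counts just collected (both 0 here). The same arithmetic written out directly, with the values from this run and a plain awk read instead of the script's helper:

nr_hugepages=512 surp=0 resv=0
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
# Fails loudly if the kernel allocated more or fewer pages than the test asked for.
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch: total=$total" >&2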
00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.125 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.125 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.126 15:01:57 -- setup/common.sh@33 -- # echo 512 00:03:28.126 15:01:57 -- setup/common.sh@33 -- # return 0 00:03:28.126 15:01:57 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:28.126 15:01:57 -- setup/hugepages.sh@112 -- # get_nodes 00:03:28.126 15:01:57 -- setup/hugepages.sh@27 -- # local node 00:03:28.126 15:01:57 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:03:28.126 15:01:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:28.126 15:01:57 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:28.126 15:01:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:28.126 15:01:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.126 15:01:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.126 15:01:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:28.126 15:01:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.126 15:01:57 -- setup/common.sh@18 -- # local node=0 00:03:28.126 15:01:57 -- setup/common.sh@19 -- # local var val 00:03:28.126 15:01:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.126 15:01:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.126 15:01:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:28.126 15:01:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:28.126 15:01:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.126 15:01:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9138584 kB' 'MemUsed: 3100536 kB' 'SwapCached: 0 kB' 'Active: 456540 kB' 'Inactive: 1258336 kB' 'Active(anon): 128708 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1596652 kB' 'Mapped: 50732 kB' 'AnonPages: 119844 kB' 'Shmem: 10484 kB' 'KernelStack: 6432 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62144 kB' 'Slab: 155004 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 
15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.126 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.126 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.127 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.127 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.127 15:01:57 -- setup/common.sh@33 -- # echo 0 00:03:28.127 15:01:57 -- setup/common.sh@33 -- # return 0 00:03:28.127 15:01:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.127 15:01:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.127 15:01:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.127 15:01:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.127 node0=512 expecting 512 00:03:28.127 ************************************ 00:03:28.127 END TEST custom_alloc 00:03:28.127 ************************************ 00:03:28.127 15:01:57 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:28.127 15:01:57 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:28.127 00:03:28.127 real 0m0.542s 00:03:28.127 user 0m0.249s 00:03:28.127 sys 0m0.291s 00:03:28.127 15:01:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:28.127 15:01:57 -- common/autotest_common.sh@10 -- # set +x 00:03:28.127 15:01:57 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:28.127 15:01:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:28.127 15:01:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:28.127 15:01:57 -- common/autotest_common.sh@10 -- # set +x 00:03:28.127 ************************************ 00:03:28.127 START TEST no_shrink_alloc 00:03:28.127 ************************************ 00:03:28.127 15:01:57 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:03:28.127 15:01:57 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:28.127 15:01:57 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:28.127 15:01:57 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:28.127 15:01:57 -- 
setup/hugepages.sh@51 -- # shift 00:03:28.127 15:01:57 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:28.127 15:01:57 -- setup/hugepages.sh@52 -- # local node_ids 00:03:28.127 15:01:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:28.127 15:01:57 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:28.127 15:01:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:28.127 15:01:57 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:28.127 15:01:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.127 15:01:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:28.127 15:01:57 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:28.127 15:01:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.127 15:01:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:28.127 15:01:57 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:28.127 15:01:57 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:28.127 15:01:57 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:28.127 15:01:57 -- setup/hugepages.sh@73 -- # return 0 00:03:28.127 15:01:57 -- setup/hugepages.sh@198 -- # setup output 00:03:28.127 15:01:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.127 15:01:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:28.700 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:28.700 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:28.700 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:28.700 15:01:57 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:28.700 15:01:57 -- setup/hugepages.sh@89 -- # local node 00:03:28.700 15:01:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:28.700 15:01:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:28.700 15:01:57 -- setup/hugepages.sh@92 -- # local surp 00:03:28.700 15:01:57 -- setup/hugepages.sh@93 -- # local resv 00:03:28.700 15:01:57 -- setup/hugepages.sh@94 -- # local anon 00:03:28.700 15:01:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:28.700 15:01:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:28.700 15:01:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:28.700 15:01:57 -- setup/common.sh@18 -- # local node= 00:03:28.700 15:01:57 -- setup/common.sh@19 -- # local var val 00:03:28.700 15:01:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.700 15:01:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.700 15:01:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.700 15:01:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.700 15:01:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.700 15:01:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8095312 kB' 'MemAvailable: 9475604 kB' 'Buffers: 3704 kB' 'Cached: 1592948 kB' 'SwapCached: 0 kB' 'Active: 456844 kB' 'Inactive: 1258336 kB' 'Active(anon): 129012 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120108 kB' 
'Mapped: 50748 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 155092 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92948 kB' 'KernelStack: 6440 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 323644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.701 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.701 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
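The verify pass for no_shrink_alloc opened (hugepages.sh@96/@97, a few records back) by checking that transparent hugepages are not globally disabled, "always [madvise] never" in this run, before counting AnonHugePages; the records above are that AnonHugePages scan winding through /proc/meminfo. A short illustration of the probe, reading the sysfs knob and /proc/meminfo directly rather than through the script's variables:

thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp_mode != *"[never]"* ]]; then
    # Only meaningful when THP can be used at all; 0 kB here, as the "echo 0" a little further down reports.
    awk '/^AnonHugePages:/ {print "AnonHugePages:", $2, "kB"}' /proc/meminfo
fi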
00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.702 15:01:57 -- setup/common.sh@33 -- # echo 0 00:03:28.702 15:01:57 -- setup/common.sh@33 -- # return 0 00:03:28.702 15:01:57 -- setup/hugepages.sh@97 -- # anon=0 00:03:28.702 15:01:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.702 15:01:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.702 15:01:57 -- setup/common.sh@18 -- # local node= 00:03:28.702 15:01:57 -- setup/common.sh@19 -- # local var val 00:03:28.702 15:01:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.702 15:01:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.702 15:01:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.702 15:01:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.702 15:01:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.702 15:01:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8095312 kB' 'MemAvailable: 9475604 kB' 'Buffers: 3704 kB' 'Cached: 1592948 kB' 'SwapCached: 0 kB' 'Active: 457072 kB' 'Inactive: 1258336 kB' 'Active(anon): 129240 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120104 kB' 'Mapped: 50680 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 155084 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92940 kB' 'KernelStack: 6420 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 323644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.702 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 
00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.703 15:01:57 -- setup/common.sh@33 -- # echo 0 00:03:28.703 15:01:57 -- setup/common.sh@33 -- # return 0 00:03:28.704 15:01:57 -- setup/hugepages.sh@99 -- # surp=0 00:03:28.704 15:01:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:28.704 15:01:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:28.704 15:01:57 -- setup/common.sh@18 -- # local node= 00:03:28.704 15:01:57 -- setup/common.sh@19 -- # local var val 00:03:28.704 15:01:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.704 15:01:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.704 15:01:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.704 15:01:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.704 15:01:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.704 15:01:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8095312 kB' 'MemAvailable: 9475604 kB' 'Buffers: 3704 kB' 'Cached: 1592948 kB' 'SwapCached: 0 kB' 'Active: 456920 kB' 'Inactive: 1258336 kB' 'Active(anon): 129088 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120220 kB' 'Mapped: 50680 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 155084 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92940 kB' 'KernelStack: 6404 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 323644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 
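Further down in this trace (setup/hugepages.sh@107-110 and the per-node loop that follows) those lookups feed a consistency check: the configured pool of 1024 hugepages has to cover the requested count plus surplus and reserved pages, both system-wide and on node0. A rough sketch of that arithmetic, reusing the get_meminfo sketch above; the helper name and structure here are illustrative, not the exact verify_nr_hugepages implementation:

    check_hugepage_accounting() {              # illustrative; see setup/hugepages.sh for the real logic
        local nr_hugepages=$1                  # requested pages, 1024 in this run
        local anon surp resv total node0_total
        anon=$(get_meminfo AnonHugePages)      # 0 kB here, so THP is not inflating the numbers
        surp=$(get_meminfo HugePages_Surp)     # surplus pages beyond the configured pool (0 here)
        resv=$(get_meminfo HugePages_Rsvd)     # reserved but not yet faulted pages (0 here)
        total=$(get_meminfo HugePages_Total)   # 1024
        # System-wide: the pool must account for requested + surplus + reserved pages.
        (( total == nr_hugepages + surp + resv )) || return 1
        # Per-node: node0's own meminfo should report the same 1024 pages.
        node0_total=$(get_meminfo HugePages_Total 0)
        echo "node0=$node0_total expecting $nr_hugepages"
        (( node0_total == nr_hugepages ))
    }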
00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 
-- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.704 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.704 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.705 15:01:57 -- setup/common.sh@33 -- # echo 0 00:03:28.705 15:01:57 -- setup/common.sh@33 -- # return 0 00:03:28.705 15:01:57 -- setup/hugepages.sh@100 -- # resv=0 00:03:28.705 nr_hugepages=1024 00:03:28.705 15:01:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:28.705 resv_hugepages=0 00:03:28.705 surplus_hugepages=0 00:03:28.705 anon_hugepages=0 00:03:28.705 15:01:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:28.705 15:01:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:28.705 15:01:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:28.705 15:01:57 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:28.705 15:01:57 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:28.705 15:01:57 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:03:28.705 15:01:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:28.705 15:01:57 -- setup/common.sh@18 -- # local node= 00:03:28.705 15:01:57 -- setup/common.sh@19 -- # local var val 00:03:28.705 15:01:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.705 15:01:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.705 15:01:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.705 15:01:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.705 15:01:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.705 15:01:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8095312 kB' 'MemAvailable: 9475604 kB' 'Buffers: 3704 kB' 'Cached: 1592948 kB' 'SwapCached: 0 kB' 'Active: 456908 kB' 'Inactive: 1258336 kB' 'Active(anon): 129076 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120208 kB' 'Mapped: 50680 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 155084 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92940 kB' 'KernelStack: 6404 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 323644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.705 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.705 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.706 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.706 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.707 15:01:57 -- setup/common.sh@33 -- # echo 1024 00:03:28.707 15:01:57 -- setup/common.sh@33 -- # return 0 00:03:28.707 15:01:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:28.707 15:01:57 -- setup/hugepages.sh@112 -- # get_nodes 00:03:28.707 15:01:57 -- setup/hugepages.sh@27 -- # local node 00:03:28.707 15:01:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.707 15:01:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:28.707 15:01:57 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:28.707 15:01:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:28.707 15:01:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.707 15:01:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.707 15:01:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:28.707 15:01:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.707 15:01:57 -- setup/common.sh@18 -- # local node=0 00:03:28.707 15:01:57 -- setup/common.sh@19 -- # local var val 00:03:28.707 15:01:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.707 15:01:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.707 15:01:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:28.707 15:01:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:28.707 15:01:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.707 15:01:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8095312 kB' 'MemUsed: 4143808 kB' 'SwapCached: 0 kB' 'Active: 456884 kB' 'Inactive: 1258336 kB' 'Active(anon): 129052 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258336 kB' 'Unevictable: 1536 kB' 
'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1596652 kB' 'Mapped: 50680 kB' 'AnonPages: 120168 kB' 'Shmem: 10484 kB' 'KernelStack: 6388 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62144 kB' 'Slab: 155080 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.707 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.707 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- 
setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # continue 00:03:28.708 15:01:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.708 15:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.708 15:01:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.708 15:01:57 -- setup/common.sh@33 -- # echo 0 00:03:28.708 15:01:57 -- setup/common.sh@33 -- # return 0 00:03:28.708 15:01:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.708 15:01:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.708 15:01:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.708 node0=1024 expecting 1024 00:03:28.708 15:01:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.708 15:01:57 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:28.708 15:01:57 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:28.708 15:01:57 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:28.708 15:01:57 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:28.708 15:01:57 -- setup/hugepages.sh@202 -- # setup output 00:03:28.708 15:01:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.708 15:01:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:28.968 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:29.230 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:29.230 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:29.230 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:29.230 15:01:58 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:29.230 15:01:58 -- setup/hugepages.sh@89 -- # local node 00:03:29.230 15:01:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:29.230 15:01:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:29.230 15:01:58 -- setup/hugepages.sh@92 -- # local surp 00:03:29.230 15:01:58 -- setup/hugepages.sh@93 -- # local resv 00:03:29.230 15:01:58 -- setup/hugepages.sh@94 -- # local anon 00:03:29.230 15:01:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:29.230 15:01:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:29.230 15:01:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:29.230 15:01:58 -- setup/common.sh@18 -- # local node= 00:03:29.230 15:01:58 -- setup/common.sh@19 -- # local var val 00:03:29.230 15:01:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.230 15:01:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.230 15:01:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.230 15:01:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.230 15:01:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.230 15:01:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.230 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.230 15:01:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8093596 kB' 'MemAvailable: 9473888 kB' 'Buffers: 3704 kB' 'Cached: 1592948 kB' 'SwapCached: 0 kB' 'Active: 456872 kB' 'Inactive: 1258336 kB' 'Active(anon): 129040 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120116 kB' 'Mapped: 50720 kB' 'Shmem: 10484 kB' 'KReclaimable: 62144 kB' 'Slab: 155076 kB' 'SReclaimable: 62144 kB' 'SUnreclaim: 92932 kB' 
'KernelStack: 6428 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 323644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:29.230 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.230 15:01:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.230 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.230 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.230 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.230 15:01:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- 
setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.231 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.231 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # continue 
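The run of entries above and below is the xtrace of get_meminfo in setup/common.sh scanning one /proc/meminfo field at a time until it reaches the requested one. A minimal bash sketch of that helper, reconstructed from this trace (names follow the trace; the real script body may differ in detail):

shopt -s extglob

# get_meminfo FIELD [NODE] - print FIELD's value from /proc/meminfo, or from
# /sys/devices/system/node/node<NODE>/meminfo when a node index is given.
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ mem
    local mem_f=/proc/meminfo

    # Per-node counters live under sysfs; fall back to the global file otherwise.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node lines carry a "Node <n> " prefix; strip it so field names match.
    mem=("${mem[@]#Node +([0-9]) }")

    # Field-by-field scan - the long run of "[[ X == FIELD ]] / continue" entries
    # in the trace - printing the first matching value.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}

A call such as get_meminfo HugePages_Surp 0, which appears further down in this log, therefore reads node0's counters instead of the global file.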
00:03:29.232 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.232 15:01:58 -- setup/common.sh@33 -- # echo 0 00:03:29.232 15:01:58 -- setup/common.sh@33 -- # return 0 00:03:29.232 15:01:58 -- setup/hugepages.sh@97 -- # anon=0 00:03:29.232 15:01:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:29.232 15:01:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.232 15:01:58 -- setup/common.sh@18 -- # local node= 00:03:29.232 15:01:58 -- setup/common.sh@19 -- # local var val 00:03:29.232 15:01:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.232 15:01:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.232 15:01:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.232 15:01:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.232 15:01:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.232 15:01:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 15:01:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8094956 kB' 'MemAvailable: 9475240 kB' 'Buffers: 3704 kB' 'Cached: 1592948 kB' 'SwapCached: 0 kB' 'Active: 453856 kB' 'Inactive: 1258336 kB' 'Active(anon): 126024 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117112 kB' 'Mapped: 49884 kB' 'Shmem: 10484 kB' 'KReclaimable: 62128 kB' 'Slab: 154980 kB' 'SReclaimable: 62128 kB' 'SUnreclaim: 92852 kB' 'KernelStack: 6336 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 305188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 
'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.232 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.232 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 
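For a quick manual cross-check of the counters these scans extract, the same values can be read directly from standard Linux paths (an aside for the reader, not part of the test scripts):

grep -E '^HugePages_(Total|Free|Rsvd|Surp)|^Hugepagesize' /proc/meminfo
grep HugePages_Total /sys/devices/system/node/node0/meminfo
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

The hugepages-2048kB directory name matches the 'Hugepagesize: 2048 kB' reported in the snapshots above.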
00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.233 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.233 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 
15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.234 15:01:58 -- setup/common.sh@33 -- # echo 0 00:03:29.234 15:01:58 -- setup/common.sh@33 -- # return 0 00:03:29.234 15:01:58 -- setup/hugepages.sh@99 -- # surp=0 00:03:29.234 15:01:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:29.234 15:01:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:29.234 15:01:58 -- setup/common.sh@18 -- # local node= 00:03:29.234 15:01:58 -- setup/common.sh@19 -- # local var val 00:03:29.234 15:01:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.234 15:01:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.234 15:01:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.234 15:01:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.234 15:01:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.234 15:01:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8094956 kB' 'MemAvailable: 9475240 kB' 'Buffers: 3704 kB' 'Cached: 1592948 kB' 'SwapCached: 0 kB' 'Active: 453892 kB' 'Inactive: 1258336 kB' 'Active(anon): 126060 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117196 kB' 'Mapped: 49884 kB' 'Shmem: 10484 kB' 'KReclaimable: 62128 kB' 'Slab: 154980 kB' 'SReclaimable: 62128 kB' 'SUnreclaim: 92852 kB' 'KernelStack: 6336 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 305188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- 
# [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.234 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.234 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 
-- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 
00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.235 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.235 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.236 15:01:58 -- setup/common.sh@33 -- # echo 0 00:03:29.236 15:01:58 -- setup/common.sh@33 -- # return 0 00:03:29.236 nr_hugepages=1024 00:03:29.236 resv_hugepages=0 00:03:29.236 surplus_hugepages=0 00:03:29.236 anon_hugepages=0 00:03:29.236 15:01:58 -- setup/hugepages.sh@100 -- # resv=0 00:03:29.236 15:01:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:29.236 15:01:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:29.236 15:01:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:29.236 15:01:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:29.236 15:01:58 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.236 15:01:58 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:29.236 15:01:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.236 15:01:58 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:29.236 15:01:58 -- setup/common.sh@18 -- # local node= 00:03:29.236 15:01:58 -- setup/common.sh@19 -- # local var val 00:03:29.236 15:01:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.236 15:01:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.236 15:01:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.236 15:01:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.236 15:01:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.236 15:01:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8094956 kB' 'MemAvailable: 9475240 kB' 'Buffers: 3704 kB' 'Cached: 1592948 kB' 'SwapCached: 0 kB' 'Active: 453916 kB' 'Inactive: 1258336 kB' 'Active(anon): 126084 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117204 kB' 'Mapped: 49884 kB' 'Shmem: 10484 kB' 'KReclaimable: 62128 kB' 'Slab: 154980 kB' 'SReclaimable: 62128 kB' 'SUnreclaim: 92852 kB' 'KernelStack: 6336 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 305188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.236 15:01:58 -- 
setup/common.sh@32 -- # continue 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.236 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.236 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 
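Every meminfo snapshot above reports HugePages_Total: 1024 with HugePages_Rsvd, HugePages_Surp and AnonHugePages all at 0, so the accounting that hugepages.sh performs in the surrounding entries reduces to 1024 == 1024 + 0 + 0 globally, and to 'node0=1024 expecting 1024' per node, as echoed earlier in the log. A condensed bash sketch of that verification, inferred from the trace (the real verify_nr_hugepages also maintains nodes_test/nodes_sys arrays set up elsewhere; get_meminfo is the helper sketched earlier):

# Hypothetical condensation of the check, not the verbatim setup/hugepages.sh.
verify_nr_hugepages_sketch() {
    local nr_hugepages=1024 node anon surp resv total
    anon=$(get_meminfo AnonHugePages)    # 0 in this run
    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
    total=$(get_meminfo HugePages_Total) # 1024 in this run

    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

    # Global check: the kernel total must equal requested + surplus + reserved pages.
    (( total == nr_hugepages + surp + resv )) || return 1

    # Per-node check: each node should hold its share plus its own surplus.
    for node in /sys/devices/system/node/node[0-9]*; do
        node=${node##*node}
        echo "node${node}=$(( nr_hugepages + $(get_meminfo HugePages_Surp "$node") )) expecting $nr_hugepages"
    done
}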
00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 
15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.237 15:01:58 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.237 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.237 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.238 15:01:58 -- setup/common.sh@33 -- # echo 1024 00:03:29.238 15:01:58 -- setup/common.sh@33 -- # return 0 00:03:29.238 15:01:58 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.238 15:01:58 -- setup/hugepages.sh@112 -- # get_nodes 00:03:29.238 15:01:58 -- setup/hugepages.sh@27 -- # local node 00:03:29.238 15:01:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.238 15:01:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:29.238 15:01:58 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:29.238 15:01:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.238 15:01:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.238 15:01:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.238 15:01:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.238 15:01:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.238 15:01:58 -- setup/common.sh@18 -- # local node=0 00:03:29.238 15:01:58 -- setup/common.sh@19 -- # local var val 00:03:29.238 15:01:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.238 15:01:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.238 15:01:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.238 15:01:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.238 15:01:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.238 15:01:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8095768 kB' 'MemUsed: 4143352 kB' 'SwapCached: 0 kB' 'Active: 453824 kB' 'Inactive: 1258336 kB' 'Active(anon): 125992 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1258336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'FilePages: 1596652 kB' 'Mapped: 49884 kB' 'AnonPages: 117076 kB' 'Shmem: 10484 kB' 'KernelStack: 6320 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62128 kB' 'Slab: 154976 kB' 'SReclaimable: 62128 kB' 'SUnreclaim: 92848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 
15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.238 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.238 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.239 15:01:58 -- setup/common.sh@32 -- 
# continue 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.239 15:01:58 -- setup/common.sh@32 -- # continue 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.239 15:01:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.239 15:01:58 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.239 15:01:58 -- setup/common.sh@33 -- # echo 0 00:03:29.239 15:01:58 -- setup/common.sh@33 -- # return 0 00:03:29.239 15:01:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.239 15:01:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.239 15:01:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.239 15:01:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.239 15:01:58 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:29.239 node0=1024 expecting 1024 00:03:29.239 15:01:58 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:29.239 ************************************ 00:03:29.239 END TEST no_shrink_alloc 00:03:29.239 ************************************ 00:03:29.239 00:03:29.239 real 0m1.101s 00:03:29.239 user 0m0.543s 00:03:29.239 sys 0m0.552s 00:03:29.239 15:01:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:29.239 15:01:58 -- common/autotest_common.sh@10 -- # set +x 00:03:29.498 15:01:58 -- setup/hugepages.sh@217 -- # clear_hp 00:03:29.498 15:01:58 -- setup/hugepages.sh@37 -- # local node hp 00:03:29.498 15:01:58 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:29.498 15:01:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:29.498 15:01:58 -- setup/hugepages.sh@41 -- # echo 0 00:03:29.498 15:01:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:29.498 15:01:58 -- setup/hugepages.sh@41 -- # echo 0 00:03:29.498 15:01:58 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:29.498 15:01:58 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:29.498 00:03:29.498 real 0m4.834s 00:03:29.498 user 0m2.335s 00:03:29.498 sys 0m2.440s 00:03:29.498 ************************************ 00:03:29.498 END TEST hugepages 00:03:29.498 ************************************ 00:03:29.498 15:01:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:29.498 15:01:58 -- common/autotest_common.sh@10 -- # set +x 00:03:29.498 15:01:58 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:29.498 15:01:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:29.498 15:01:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:29.498 15:01:58 -- common/autotest_common.sh@10 -- # set +x 00:03:29.498 ************************************ 00:03:29.498 START TEST driver 00:03:29.498 ************************************ 00:03:29.498 15:01:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:29.498 * Looking for test storage... 
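For readers following the hugepages trace that ends above, the lookup pattern being exercised can be summarized with a small illustrative sketch; this is not SPDK's setup/common.sh get_meminfo itself, and the function name and details below are assumptions for illustration only: pick the per-node meminfo file when a node is given, strip the "Node <n>" prefix, then scan "field: value" pairs until the requested key matches.

    # minimal sketch of the meminfo lookup pattern traced above (assumed name)
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # prefer the per-node view when a node index is supplied, as the trace does
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local var val rest
        while IFS=': ' read -r var val rest; do
            # HugePages_* entries carry a bare count; other fields end in "kB"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Total 0   # prints 1024 on the node traced above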
00:03:29.498 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:29.498 15:01:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:29.498 15:01:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:29.498 15:01:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:29.498 15:01:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:29.498 15:01:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:29.498 15:01:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:29.498 15:01:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:29.498 15:01:58 -- scripts/common.sh@335 -- # IFS=.-: 00:03:29.498 15:01:58 -- scripts/common.sh@335 -- # read -ra ver1 00:03:29.498 15:01:58 -- scripts/common.sh@336 -- # IFS=.-: 00:03:29.498 15:01:58 -- scripts/common.sh@336 -- # read -ra ver2 00:03:29.498 15:01:58 -- scripts/common.sh@337 -- # local 'op=<' 00:03:29.498 15:01:58 -- scripts/common.sh@339 -- # ver1_l=2 00:03:29.498 15:01:58 -- scripts/common.sh@340 -- # ver2_l=1 00:03:29.498 15:01:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:29.498 15:01:58 -- scripts/common.sh@343 -- # case "$op" in 00:03:29.498 15:01:58 -- scripts/common.sh@344 -- # : 1 00:03:29.498 15:01:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:29.498 15:01:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:29.498 15:01:58 -- scripts/common.sh@364 -- # decimal 1 00:03:29.498 15:01:58 -- scripts/common.sh@352 -- # local d=1 00:03:29.498 15:01:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:29.498 15:01:58 -- scripts/common.sh@354 -- # echo 1 00:03:29.498 15:01:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:29.498 15:01:58 -- scripts/common.sh@365 -- # decimal 2 00:03:29.498 15:01:58 -- scripts/common.sh@352 -- # local d=2 00:03:29.498 15:01:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:29.498 15:01:58 -- scripts/common.sh@354 -- # echo 2 00:03:29.498 15:01:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:29.498 15:01:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:29.498 15:01:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:29.499 15:01:58 -- scripts/common.sh@367 -- # return 0 00:03:29.499 15:01:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:29.499 15:01:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:29.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.499 --rc genhtml_branch_coverage=1 00:03:29.499 --rc genhtml_function_coverage=1 00:03:29.499 --rc genhtml_legend=1 00:03:29.499 --rc geninfo_all_blocks=1 00:03:29.499 --rc geninfo_unexecuted_blocks=1 00:03:29.499 00:03:29.499 ' 00:03:29.499 15:01:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:29.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.499 --rc genhtml_branch_coverage=1 00:03:29.499 --rc genhtml_function_coverage=1 00:03:29.499 --rc genhtml_legend=1 00:03:29.499 --rc geninfo_all_blocks=1 00:03:29.499 --rc geninfo_unexecuted_blocks=1 00:03:29.499 00:03:29.499 ' 00:03:29.499 15:01:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:29.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.499 --rc genhtml_branch_coverage=1 00:03:29.499 --rc genhtml_function_coverage=1 00:03:29.499 --rc genhtml_legend=1 00:03:29.499 --rc geninfo_all_blocks=1 00:03:29.499 --rc geninfo_unexecuted_blocks=1 00:03:29.499 00:03:29.499 ' 00:03:29.499 15:01:58 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:29.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.499 --rc genhtml_branch_coverage=1 00:03:29.499 --rc genhtml_function_coverage=1 00:03:29.499 --rc genhtml_legend=1 00:03:29.499 --rc geninfo_all_blocks=1 00:03:29.499 --rc geninfo_unexecuted_blocks=1 00:03:29.499 00:03:29.499 ' 00:03:29.499 15:01:58 -- setup/driver.sh@68 -- # setup reset 00:03:29.499 15:01:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.499 15:01:58 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:30.067 15:01:59 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:30.067 15:01:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:30.067 15:01:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:30.067 15:01:59 -- common/autotest_common.sh@10 -- # set +x 00:03:30.067 ************************************ 00:03:30.067 START TEST guess_driver 00:03:30.067 ************************************ 00:03:30.067 15:01:59 -- common/autotest_common.sh@1114 -- # guess_driver 00:03:30.067 15:01:59 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:30.067 15:01:59 -- setup/driver.sh@47 -- # local fail=0 00:03:30.067 15:01:59 -- setup/driver.sh@49 -- # pick_driver 00:03:30.067 15:01:59 -- setup/driver.sh@36 -- # vfio 00:03:30.067 15:01:59 -- setup/driver.sh@21 -- # local iommu_grups 00:03:30.067 15:01:59 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:30.067 15:01:59 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:30.068 15:01:59 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:30.068 15:01:59 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:30.068 15:01:59 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:30.068 15:01:59 -- setup/driver.sh@32 -- # return 1 00:03:30.068 15:01:59 -- setup/driver.sh@38 -- # uio 00:03:30.068 15:01:59 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:30.068 15:01:59 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:30.068 15:01:59 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:30.068 15:01:59 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:30.068 15:01:59 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:30.068 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:30.068 15:01:59 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:30.068 15:01:59 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:30.068 15:01:59 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:30.068 Looking for driver=uio_pci_generic 00:03:30.068 15:01:59 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:30.068 15:01:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:30.068 15:01:59 -- setup/driver.sh@45 -- # setup output config 00:03:30.068 15:01:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.068 15:01:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:31.004 15:01:59 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:31.004 15:01:59 -- setup/driver.sh@58 -- # continue 00:03:31.004 15:01:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.004 15:02:00 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.004 15:02:00 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:03:31.004 15:02:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.004 15:02:00 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.004 15:02:00 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:31.004 15:02:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.004 15:02:00 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:31.004 15:02:00 -- setup/driver.sh@65 -- # setup reset 00:03:31.004 15:02:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.004 15:02:00 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:31.572 ************************************ 00:03:31.572 END TEST guess_driver 00:03:31.572 ************************************ 00:03:31.572 00:03:31.572 real 0m1.412s 00:03:31.572 user 0m0.546s 00:03:31.572 sys 0m0.878s 00:03:31.572 15:02:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:31.572 15:02:00 -- common/autotest_common.sh@10 -- # set +x 00:03:31.572 ************************************ 00:03:31.572 END TEST driver 00:03:31.572 ************************************ 00:03:31.572 00:03:31.572 real 0m2.193s 00:03:31.572 user 0m0.860s 00:03:31.572 sys 0m1.403s 00:03:31.572 15:02:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:31.572 15:02:00 -- common/autotest_common.sh@10 -- # set +x 00:03:31.572 15:02:00 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:31.572 15:02:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:31.572 15:02:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:31.572 15:02:00 -- common/autotest_common.sh@10 -- # set +x 00:03:31.572 ************************************ 00:03:31.572 START TEST devices 00:03:31.572 ************************************ 00:03:31.572 15:02:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:31.832 * Looking for test storage... 00:03:31.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:31.832 15:02:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:31.832 15:02:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:31.832 15:02:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:31.832 15:02:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:31.832 15:02:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:31.832 15:02:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:31.832 15:02:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:31.832 15:02:00 -- scripts/common.sh@335 -- # IFS=.-: 00:03:31.832 15:02:00 -- scripts/common.sh@335 -- # read -ra ver1 00:03:31.832 15:02:00 -- scripts/common.sh@336 -- # IFS=.-: 00:03:31.832 15:02:00 -- scripts/common.sh@336 -- # read -ra ver2 00:03:31.832 15:02:00 -- scripts/common.sh@337 -- # local 'op=<' 00:03:31.832 15:02:00 -- scripts/common.sh@339 -- # ver1_l=2 00:03:31.832 15:02:00 -- scripts/common.sh@340 -- # ver2_l=1 00:03:31.832 15:02:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:31.832 15:02:00 -- scripts/common.sh@343 -- # case "$op" in 00:03:31.832 15:02:00 -- scripts/common.sh@344 -- # : 1 00:03:31.832 15:02:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:31.832 15:02:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:31.832 15:02:00 -- scripts/common.sh@364 -- # decimal 1 00:03:31.832 15:02:00 -- scripts/common.sh@352 -- # local d=1 00:03:31.832 15:02:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:31.832 15:02:00 -- scripts/common.sh@354 -- # echo 1 00:03:31.832 15:02:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:31.832 15:02:00 -- scripts/common.sh@365 -- # decimal 2 00:03:31.832 15:02:00 -- scripts/common.sh@352 -- # local d=2 00:03:31.832 15:02:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:31.832 15:02:00 -- scripts/common.sh@354 -- # echo 2 00:03:31.832 15:02:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:31.832 15:02:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:31.832 15:02:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:31.832 15:02:00 -- scripts/common.sh@367 -- # return 0 00:03:31.832 15:02:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:31.832 15:02:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:31.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.832 --rc genhtml_branch_coverage=1 00:03:31.832 --rc genhtml_function_coverage=1 00:03:31.832 --rc genhtml_legend=1 00:03:31.832 --rc geninfo_all_blocks=1 00:03:31.832 --rc geninfo_unexecuted_blocks=1 00:03:31.832 00:03:31.832 ' 00:03:31.832 15:02:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:31.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.832 --rc genhtml_branch_coverage=1 00:03:31.832 --rc genhtml_function_coverage=1 00:03:31.832 --rc genhtml_legend=1 00:03:31.832 --rc geninfo_all_blocks=1 00:03:31.832 --rc geninfo_unexecuted_blocks=1 00:03:31.832 00:03:31.832 ' 00:03:31.832 15:02:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:31.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.832 --rc genhtml_branch_coverage=1 00:03:31.832 --rc genhtml_function_coverage=1 00:03:31.832 --rc genhtml_legend=1 00:03:31.832 --rc geninfo_all_blocks=1 00:03:31.832 --rc geninfo_unexecuted_blocks=1 00:03:31.832 00:03:31.832 ' 00:03:31.832 15:02:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:31.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.832 --rc genhtml_branch_coverage=1 00:03:31.832 --rc genhtml_function_coverage=1 00:03:31.832 --rc genhtml_legend=1 00:03:31.832 --rc geninfo_all_blocks=1 00:03:31.832 --rc geninfo_unexecuted_blocks=1 00:03:31.832 00:03:31.832 ' 00:03:31.832 15:02:00 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:31.832 15:02:00 -- setup/devices.sh@192 -- # setup reset 00:03:31.832 15:02:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.832 15:02:00 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:32.768 15:02:01 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:32.768 15:02:01 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:32.768 15:02:01 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:32.768 15:02:01 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:32.768 15:02:01 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:32.768 15:02:01 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:32.768 15:02:01 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:32.768 15:02:01 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:32.768 15:02:01 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:03:32.768 15:02:01 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:32.768 15:02:01 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:32.768 15:02:01 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:32.768 15:02:01 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:32.768 15:02:01 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:32.768 15:02:01 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:32.768 15:02:01 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:03:32.768 15:02:01 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:03:32.768 15:02:01 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:32.768 15:02:01 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:32.768 15:02:01 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:32.768 15:02:01 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:03:32.768 15:02:01 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:03:32.768 15:02:01 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:32.768 15:02:01 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:32.768 15:02:01 -- setup/devices.sh@196 -- # blocks=() 00:03:32.768 15:02:01 -- setup/devices.sh@196 -- # declare -a blocks 00:03:32.768 15:02:01 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:32.768 15:02:01 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:32.768 15:02:01 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:32.768 15:02:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:32.768 15:02:01 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:32.768 15:02:01 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:32.768 15:02:01 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:03:32.768 15:02:01 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:32.768 15:02:01 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:32.768 15:02:01 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:32.768 15:02:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:32.768 No valid GPT data, bailing 00:03:32.768 15:02:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:32.768 15:02:01 -- scripts/common.sh@393 -- # pt= 00:03:32.768 15:02:01 -- scripts/common.sh@394 -- # return 1 00:03:32.768 15:02:01 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:32.768 15:02:01 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:32.768 15:02:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:32.768 15:02:01 -- setup/common.sh@80 -- # echo 5368709120 00:03:32.768 15:02:01 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:32.768 15:02:01 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:32.768 15:02:01 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:03:32.768 15:02:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:32.768 15:02:01 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:32.768 15:02:01 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:32.768 15:02:01 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:32.768 15:02:01 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:32.768 15:02:01 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
00:03:32.768 15:02:01 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:03:32.768 15:02:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:32.768 No valid GPT data, bailing 00:03:32.768 15:02:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:32.768 15:02:01 -- scripts/common.sh@393 -- # pt= 00:03:32.768 15:02:01 -- scripts/common.sh@394 -- # return 1 00:03:32.768 15:02:01 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:32.768 15:02:01 -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:32.768 15:02:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:32.768 15:02:01 -- setup/common.sh@80 -- # echo 4294967296 00:03:32.768 15:02:01 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:32.768 15:02:01 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:32.768 15:02:01 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:32.768 15:02:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:32.768 15:02:01 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:03:32.768 15:02:01 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:32.768 15:02:01 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:32.768 15:02:01 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:32.768 15:02:01 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:03:32.768 15:02:01 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:03:32.768 15:02:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:03:32.768 No valid GPT data, bailing 00:03:32.768 15:02:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:32.768 15:02:01 -- scripts/common.sh@393 -- # pt= 00:03:32.768 15:02:01 -- scripts/common.sh@394 -- # return 1 00:03:32.768 15:02:01 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:03:32.768 15:02:01 -- setup/common.sh@76 -- # local dev=nvme1n2 00:03:32.768 15:02:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:03:32.768 15:02:01 -- setup/common.sh@80 -- # echo 4294967296 00:03:32.768 15:02:01 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:32.768 15:02:01 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:32.768 15:02:01 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:32.768 15:02:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:32.768 15:02:01 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:03:32.768 15:02:01 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:32.768 15:02:01 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:32.768 15:02:01 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:32.768 15:02:01 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:03:32.768 15:02:01 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:03:32.768 15:02:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:03:32.768 No valid GPT data, bailing 00:03:32.768 15:02:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:32.768 15:02:02 -- scripts/common.sh@393 -- # pt= 00:03:32.768 15:02:02 -- scripts/common.sh@394 -- # return 1 00:03:32.768 15:02:02 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:03:32.768 15:02:02 -- setup/common.sh@76 -- # local dev=nvme1n3 00:03:32.769 15:02:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:03:32.769 15:02:02 -- setup/common.sh@80 -- # echo 4294967296 
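The device-selection loop traced here gates each NVMe namespace on two checks: no valid GPT/partition data ("No valid GPT data, bailing") and a size of at least min_disk_size (3221225472 bytes). A minimal sketch of the size gate follows; the helper name and the sector-count method are assumptions, only the threshold and the byte values (5368709120, 4294967296) come from the trace.

    # illustrative size gate, not the SPDK devices.sh implementation
    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace
    disk_big_enough() {
        local dev=$1 sectors bytes
        # /sys/block/<dev>/size reports the device length in 512-byte sectors
        sectors=$(< "/sys/block/$dev/size")
        bytes=$((sectors * 512))
        (( bytes >= min_disk_size ))
    }
    # e.g. disk_big_enough nvme1n2 && echo usable   # 4294967296 >= 3221225472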
00:03:32.769 15:02:02 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:32.769 15:02:02 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:32.769 15:02:02 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:32.769 15:02:02 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:32.769 15:02:02 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:32.769 15:02:02 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:32.769 15:02:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:32.769 15:02:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:32.769 15:02:02 -- common/autotest_common.sh@10 -- # set +x 00:03:32.769 ************************************ 00:03:32.769 START TEST nvme_mount 00:03:32.769 ************************************ 00:03:32.769 15:02:02 -- common/autotest_common.sh@1114 -- # nvme_mount 00:03:32.769 15:02:02 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:32.769 15:02:02 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:32.769 15:02:02 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:32.769 15:02:02 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:32.769 15:02:02 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:32.769 15:02:02 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:32.769 15:02:02 -- setup/common.sh@40 -- # local part_no=1 00:03:32.769 15:02:02 -- setup/common.sh@41 -- # local size=1073741824 00:03:32.769 15:02:02 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:32.769 15:02:02 -- setup/common.sh@44 -- # parts=() 00:03:32.769 15:02:02 -- setup/common.sh@44 -- # local parts 00:03:32.769 15:02:02 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:32.769 15:02:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:32.769 15:02:02 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:32.769 15:02:02 -- setup/common.sh@46 -- # (( part++ )) 00:03:32.769 15:02:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:32.769 15:02:02 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:32.769 15:02:02 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:32.769 15:02:02 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:34.149 Creating new GPT entries in memory. 00:03:34.149 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:34.150 other utilities. 00:03:34.150 15:02:03 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:34.150 15:02:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:34.150 15:02:03 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:34.150 15:02:03 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:34.150 15:02:03 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:35.086 Creating new GPT entries in memory. 00:03:35.086 The operation has completed successfully. 
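The nvme_mount test above drives a partition, format, and mount sequence; the following is a simplified sketch of that flow, omitting the uevent synchronization and flock steps the real script uses. The paths, sgdisk bounds, and mkfs flags are taken from the trace; the linear command order is an assumption for illustration.

    # simplified sketch of the traced nvme_mount preparation, not the test script itself
    disk=/dev/nvme0n1
    part=${disk}p1
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all                 # wipe any existing GPT/MBR, as traced
    sgdisk "$disk" --new=1:2048:264191       # one 262144-sector (~128 MiB) partition
    mkdir -p "$mnt"
    mkfs.ext4 -qF "$part"                    # quiet, force: format the new partition
    mount "$part" "$mnt"
    touch "$mnt/test_nvme"                   # dummy file the verify step checks later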
00:03:35.086 15:02:04 -- setup/common.sh@57 -- # (( part++ )) 00:03:35.086 15:02:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:35.086 15:02:04 -- setup/common.sh@62 -- # wait 52117 00:03:35.086 15:02:04 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:35.086 15:02:04 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:35.086 15:02:04 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:35.086 15:02:04 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:35.087 15:02:04 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:35.087 15:02:04 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:35.087 15:02:04 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:35.087 15:02:04 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:35.087 15:02:04 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:35.087 15:02:04 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:35.087 15:02:04 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:35.087 15:02:04 -- setup/devices.sh@53 -- # local found=0 00:03:35.087 15:02:04 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:35.087 15:02:04 -- setup/devices.sh@56 -- # : 00:03:35.087 15:02:04 -- setup/devices.sh@59 -- # local pci status 00:03:35.087 15:02:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.087 15:02:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:35.087 15:02:04 -- setup/devices.sh@47 -- # setup output config 00:03:35.087 15:02:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.087 15:02:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:35.087 15:02:04 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:35.087 15:02:04 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:35.087 15:02:04 -- setup/devices.sh@63 -- # found=1 00:03:35.087 15:02:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.087 15:02:04 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:35.087 15:02:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.655 15:02:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:35.655 15:02:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.655 15:02:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:35.655 15:02:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.655 15:02:04 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:35.655 15:02:04 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:35.655 15:02:04 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:35.655 15:02:04 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:35.655 15:02:04 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:35.655 15:02:04 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:35.655 15:02:04 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:35.655 15:02:04 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:35.655 15:02:04 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:35.655 15:02:04 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:35.655 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:35.655 15:02:04 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:35.655 15:02:04 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:35.914 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:35.914 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:35.914 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:35.914 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:35.914 15:02:05 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:35.914 15:02:05 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:35.914 15:02:05 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:35.914 15:02:05 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:35.914 15:02:05 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:35.914 15:02:05 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:35.914 15:02:05 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:35.914 15:02:05 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:35.914 15:02:05 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:35.914 15:02:05 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:35.914 15:02:05 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:35.914 15:02:05 -- setup/devices.sh@53 -- # local found=0 00:03:35.914 15:02:05 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:35.914 15:02:05 -- setup/devices.sh@56 -- # : 00:03:35.914 15:02:05 -- setup/devices.sh@59 -- # local pci status 00:03:35.914 15:02:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.915 15:02:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:35.915 15:02:05 -- setup/devices.sh@47 -- # setup output config 00:03:35.915 15:02:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.915 15:02:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:36.174 15:02:05 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:36.174 15:02:05 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:36.174 15:02:05 -- setup/devices.sh@63 -- # found=1 00:03:36.174 15:02:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.174 15:02:05 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:36.174 
15:02:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.433 15:02:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:36.433 15:02:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.692 15:02:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:36.692 15:02:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.692 15:02:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:36.692 15:02:05 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:36.692 15:02:05 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:36.692 15:02:05 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:36.692 15:02:05 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:36.692 15:02:05 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:36.692 15:02:05 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:03:36.692 15:02:05 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:36.692 15:02:05 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:36.692 15:02:05 -- setup/devices.sh@50 -- # local mount_point= 00:03:36.692 15:02:05 -- setup/devices.sh@51 -- # local test_file= 00:03:36.692 15:02:05 -- setup/devices.sh@53 -- # local found=0 00:03:36.692 15:02:05 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:36.692 15:02:05 -- setup/devices.sh@59 -- # local pci status 00:03:36.692 15:02:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.692 15:02:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:36.692 15:02:05 -- setup/devices.sh@47 -- # setup output config 00:03:36.693 15:02:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.693 15:02:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:36.952 15:02:06 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:36.952 15:02:06 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:36.952 15:02:06 -- setup/devices.sh@63 -- # found=1 00:03:36.952 15:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.952 15:02:06 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:36.952 15:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.212 15:02:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:37.212 15:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.212 15:02:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:37.212 15:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.471 15:02:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:37.471 15:02:06 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:37.471 15:02:06 -- setup/devices.sh@68 -- # return 0 00:03:37.471 15:02:06 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:37.471 15:02:06 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:37.471 15:02:06 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:37.471 15:02:06 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:37.471 15:02:06 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:37.471 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:03:37.471 00:03:37.471 real 0m4.487s 00:03:37.471 user 0m1.044s 00:03:37.471 sys 0m1.089s 00:03:37.471 15:02:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:37.471 ************************************ 00:03:37.471 15:02:06 -- common/autotest_common.sh@10 -- # set +x 00:03:37.471 END TEST nvme_mount 00:03:37.471 ************************************ 00:03:37.471 15:02:06 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:37.471 15:02:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:37.471 15:02:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:37.471 15:02:06 -- common/autotest_common.sh@10 -- # set +x 00:03:37.471 ************************************ 00:03:37.471 START TEST dm_mount 00:03:37.471 ************************************ 00:03:37.471 15:02:06 -- common/autotest_common.sh@1114 -- # dm_mount 00:03:37.471 15:02:06 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:37.471 15:02:06 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:37.471 15:02:06 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:37.471 15:02:06 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:37.471 15:02:06 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:37.471 15:02:06 -- setup/common.sh@40 -- # local part_no=2 00:03:37.471 15:02:06 -- setup/common.sh@41 -- # local size=1073741824 00:03:37.471 15:02:06 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:37.471 15:02:06 -- setup/common.sh@44 -- # parts=() 00:03:37.471 15:02:06 -- setup/common.sh@44 -- # local parts 00:03:37.471 15:02:06 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:37.471 15:02:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:37.471 15:02:06 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:37.471 15:02:06 -- setup/common.sh@46 -- # (( part++ )) 00:03:37.471 15:02:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:37.471 15:02:06 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:37.471 15:02:06 -- setup/common.sh@46 -- # (( part++ )) 00:03:37.471 15:02:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:37.471 15:02:06 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:37.471 15:02:06 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:37.471 15:02:06 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:38.410 Creating new GPT entries in memory. 00:03:38.410 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:38.410 other utilities. 00:03:38.410 15:02:07 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:38.410 15:02:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:38.410 15:02:07 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:38.410 15:02:07 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:38.410 15:02:07 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:39.348 Creating new GPT entries in memory. 00:03:39.348 The operation has completed successfully. 00:03:39.348 15:02:08 -- setup/common.sh@57 -- # (( part++ )) 00:03:39.348 15:02:08 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:39.348 15:02:08 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:39.348 15:02:08 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:39.348 15:02:08 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:40.726 The operation has completed successfully. 00:03:40.726 15:02:09 -- setup/common.sh@57 -- # (( part++ )) 00:03:40.726 15:02:09 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:40.726 15:02:09 -- setup/common.sh@62 -- # wait 52577 00:03:40.726 15:02:09 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:40.726 15:02:09 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:40.726 15:02:09 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:40.726 15:02:09 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:40.726 15:02:09 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:40.726 15:02:09 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:40.726 15:02:09 -- setup/devices.sh@161 -- # break 00:03:40.726 15:02:09 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:40.726 15:02:09 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:40.726 15:02:09 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:40.726 15:02:09 -- setup/devices.sh@166 -- # dm=dm-0 00:03:40.726 15:02:09 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:40.726 15:02:09 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:40.726 15:02:09 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:40.726 15:02:09 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:40.726 15:02:09 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:40.726 15:02:09 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:40.726 15:02:09 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:40.726 15:02:09 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:40.726 15:02:09 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:40.726 15:02:09 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:40.726 15:02:09 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:40.726 15:02:09 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:40.726 15:02:09 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:40.726 15:02:09 -- setup/devices.sh@53 -- # local found=0 00:03:40.726 15:02:09 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:40.726 15:02:09 -- setup/devices.sh@56 -- # : 00:03:40.726 15:02:09 -- setup/devices.sh@59 -- # local pci status 00:03:40.726 15:02:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.726 15:02:09 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:40.726 15:02:09 -- setup/devices.sh@47 -- # setup output config 00:03:40.726 15:02:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.726 15:02:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:40.726 15:02:09 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:40.726 15:02:09 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:40.726 15:02:09 -- setup/devices.sh@63 -- # found=1 00:03:40.726 15:02:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.726 15:02:09 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:40.726 15:02:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.985 15:02:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:40.985 15:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.244 15:02:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:41.244 15:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.244 15:02:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:41.244 15:02:10 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:41.244 15:02:10 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:41.244 15:02:10 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:41.244 15:02:10 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:41.244 15:02:10 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:41.244 15:02:10 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:41.244 15:02:10 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:41.244 15:02:10 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:41.244 15:02:10 -- setup/devices.sh@50 -- # local mount_point= 00:03:41.244 15:02:10 -- setup/devices.sh@51 -- # local test_file= 00:03:41.244 15:02:10 -- setup/devices.sh@53 -- # local found=0 00:03:41.244 15:02:10 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:41.244 15:02:10 -- setup/devices.sh@59 -- # local pci status 00:03:41.244 15:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.244 15:02:10 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:41.244 15:02:10 -- setup/devices.sh@47 -- # setup output config 00:03:41.245 15:02:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.245 15:02:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:41.504 15:02:10 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:41.504 15:02:10 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:41.504 15:02:10 -- setup/devices.sh@63 -- # found=1 00:03:41.504 15:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.504 15:02:10 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:41.504 15:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.764 15:02:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:41.764 15:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.764 15:02:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:41.764 15:02:10 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.023 15:02:11 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:42.023 15:02:11 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:42.023 15:02:11 -- setup/devices.sh@68 -- # return 0 00:03:42.023 15:02:11 -- setup/devices.sh@187 -- # cleanup_dm 00:03:42.023 15:02:11 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:42.023 15:02:11 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:42.023 15:02:11 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:42.023 15:02:11 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:42.023 15:02:11 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:42.023 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:42.023 15:02:11 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:42.023 15:02:11 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:42.023 00:03:42.023 real 0m4.565s 00:03:42.023 user 0m0.683s 00:03:42.023 sys 0m0.817s 00:03:42.023 15:02:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:42.023 ************************************ 00:03:42.023 END TEST dm_mount 00:03:42.023 ************************************ 00:03:42.023 15:02:11 -- common/autotest_common.sh@10 -- # set +x 00:03:42.023 15:02:11 -- setup/devices.sh@1 -- # cleanup 00:03:42.023 15:02:11 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:42.023 15:02:11 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:42.023 15:02:11 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:42.023 15:02:11 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:42.023 15:02:11 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:42.023 15:02:11 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:42.282 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:42.282 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:42.283 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:42.283 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:42.283 15:02:11 -- setup/devices.sh@12 -- # cleanup_dm 00:03:42.283 15:02:11 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:42.283 15:02:11 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:42.283 15:02:11 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:42.283 15:02:11 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:42.283 15:02:11 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:42.283 15:02:11 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:42.283 ************************************ 00:03:42.283 END TEST devices 00:03:42.283 ************************************ 00:03:42.283 00:03:42.283 real 0m10.644s 00:03:42.283 user 0m2.452s 00:03:42.283 sys 0m2.493s 00:03:42.283 15:02:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:42.283 15:02:11 -- common/autotest_common.sh@10 -- # set +x 00:03:42.283 00:03:42.283 real 0m22.304s 00:03:42.283 user 0m7.683s 00:03:42.283 sys 0m8.908s 00:03:42.283 15:02:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:42.283 15:02:11 -- common/autotest_common.sh@10 -- # set +x 00:03:42.283 ************************************ 00:03:42.283 END TEST setup.sh 00:03:42.283 ************************************ 00:03:42.283 15:02:11 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:42.542 Hugepages 00:03:42.542 node hugesize free / total 00:03:42.542 node0 1048576kB 0 / 0 00:03:42.542 node0 2048kB 2048 / 2048 00:03:42.542 00:03:42.542 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:42.542 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:42.800 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:42.800 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:42.800 15:02:11 -- spdk/autotest.sh@128 -- # uname -s 00:03:42.800 15:02:11 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:03:42.800 15:02:11 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:03:42.800 15:02:11 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:43.369 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:43.369 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:43.628 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:43.628 15:02:12 -- common/autotest_common.sh@1527 -- # sleep 1 00:03:44.567 15:02:13 -- common/autotest_common.sh@1528 -- # bdfs=() 00:03:44.567 15:02:13 -- common/autotest_common.sh@1528 -- # local bdfs 00:03:44.567 15:02:13 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:03:44.567 15:02:13 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:03:44.567 15:02:13 -- common/autotest_common.sh@1508 -- # bdfs=() 00:03:44.567 15:02:13 -- common/autotest_common.sh@1508 -- # local bdfs 00:03:44.567 15:02:13 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:44.567 15:02:13 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:44.567 15:02:13 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:03:44.567 15:02:13 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:03:44.567 15:02:13 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:03:44.567 15:02:13 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:45.135 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.135 Waiting for block devices as requested 00:03:45.135 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:03:45.135 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:03:45.135 15:02:14 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:03:45.135 15:02:14 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:03:45.395 15:02:14 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:03:45.395 15:02:14 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:45.395 15:02:14 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:03:45.395 15:02:14 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:03:45.395 15:02:14 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:03:45.395 15:02:14 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:03:45.395 15:02:14 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:03:45.395 15:02:14 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:03:45.395 15:02:14 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:45.395 15:02:14 -- common/autotest_common.sh@1540 -- # grep oacs 00:03:45.395 15:02:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:45.395 15:02:14 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:03:45.395 15:02:14 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:03:45.395 15:02:14 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:03:45.395 15:02:14 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:03:45.395 15:02:14 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:03:45.395 15:02:14 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:03:45.395 15:02:14 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:03:45.395 15:02:14 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:03:45.395 15:02:14 -- common/autotest_common.sh@1552 -- # continue 00:03:45.395 15:02:14 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:03:45.395 15:02:14 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:03:45.395 15:02:14 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:45.395 15:02:14 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:03:45.395 15:02:14 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:03:45.395 15:02:14 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:03:45.395 15:02:14 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:03:45.395 15:02:14 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:03:45.395 15:02:14 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:03:45.396 15:02:14 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:03:45.396 15:02:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:45.396 15:02:14 -- common/autotest_common.sh@1540 -- # grep oacs 00:03:45.396 15:02:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:45.396 15:02:14 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:03:45.396 15:02:14 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:03:45.396 15:02:14 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:03:45.396 15:02:14 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:03:45.396 15:02:14 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:03:45.396 15:02:14 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:03:45.396 15:02:14 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:03:45.396 15:02:14 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:03:45.396 15:02:14 -- common/autotest_common.sh@1552 -- # continue 00:03:45.396 15:02:14 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:03:45.396 15:02:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:45.396 15:02:14 -- common/autotest_common.sh@10 -- # set +x 00:03:45.396 15:02:14 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:03:45.396 15:02:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:45.396 15:02:14 -- common/autotest_common.sh@10 -- # set +x 00:03:45.396 15:02:14 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:45.965 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.225 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:46.225 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:03:46.225 15:02:15 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:03:46.225 15:02:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:46.225 15:02:15 -- common/autotest_common.sh@10 -- # set +x 00:03:46.225 15:02:15 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:03:46.225 15:02:15 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:03:46.225 15:02:15 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:03:46.225 15:02:15 -- common/autotest_common.sh@1572 -- # bdfs=() 00:03:46.225 15:02:15 -- common/autotest_common.sh@1572 -- # local bdfs 00:03:46.225 15:02:15 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:03:46.225 15:02:15 -- common/autotest_common.sh@1508 -- # bdfs=() 00:03:46.225 15:02:15 -- common/autotest_common.sh@1508 -- # local bdfs 00:03:46.225 15:02:15 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:46.225 15:02:15 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:46.225 15:02:15 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:03:46.225 15:02:15 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:03:46.225 15:02:15 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:03:46.225 15:02:15 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:03:46.225 15:02:15 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:03:46.225 15:02:15 -- common/autotest_common.sh@1575 -- # device=0x0010 00:03:46.225 15:02:15 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:46.225 15:02:15 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:03:46.225 15:02:15 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:03:46.225 15:02:15 -- common/autotest_common.sh@1575 -- # device=0x0010 00:03:46.225 15:02:15 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:46.225 15:02:15 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:03:46.225 15:02:15 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:03:46.225 15:02:15 -- common/autotest_common.sh@1588 -- # return 0 00:03:46.225 15:02:15 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:03:46.225 15:02:15 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:03:46.225 15:02:15 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:03:46.225 15:02:15 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:03:46.225 15:02:15 -- spdk/autotest.sh@160 -- # timing_enter lib 00:03:46.225 15:02:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:46.225 15:02:15 -- common/autotest_common.sh@10 -- # set +x 00:03:46.225 15:02:15 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:46.225 15:02:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:46.225 15:02:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:46.225 15:02:15 -- common/autotest_common.sh@10 -- # set +x 00:03:46.225 ************************************ 00:03:46.225 START TEST env 00:03:46.225 ************************************ 00:03:46.225 15:02:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:46.484 * Looking for test storage... 
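The opal_revert_cleanup step above only reverts drives whose PCI device id is 0x0a54; both emulated controllers in this VM report 0x0010, so the list comes back empty and the function returns immediately. A standalone sketch of that same selection (the gen_nvme.sh | jq pipeline and the sysfs path are the ones visible in the trace; the variable names are only illustrative):

    # list NVMe traddrs the same way the harness does, keep only opal-capable 0x0a54 parts
    nvme_bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    opal_bdfs=()
    for bdf in "${nvme_bdfs[@]}"; do
        dev_id=$(cat "/sys/bus/pci/devices/$bdf/device")   # reads 0x0010 for these emulated QEMU controllers
        [[ $dev_id == 0x0a54 ]] && opal_bdfs+=("$bdf")
    done
    echo "opal-capable controllers found: ${#opal_bdfs[@]}"  # 0 in this run, so nothing to revert
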
00:03:46.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:46.484 15:02:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:46.484 15:02:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:46.484 15:02:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:46.484 15:02:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:46.484 15:02:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:46.484 15:02:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:46.484 15:02:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:46.484 15:02:15 -- scripts/common.sh@335 -- # IFS=.-: 00:03:46.484 15:02:15 -- scripts/common.sh@335 -- # read -ra ver1 00:03:46.485 15:02:15 -- scripts/common.sh@336 -- # IFS=.-: 00:03:46.485 15:02:15 -- scripts/common.sh@336 -- # read -ra ver2 00:03:46.485 15:02:15 -- scripts/common.sh@337 -- # local 'op=<' 00:03:46.485 15:02:15 -- scripts/common.sh@339 -- # ver1_l=2 00:03:46.485 15:02:15 -- scripts/common.sh@340 -- # ver2_l=1 00:03:46.485 15:02:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:46.485 15:02:15 -- scripts/common.sh@343 -- # case "$op" in 00:03:46.485 15:02:15 -- scripts/common.sh@344 -- # : 1 00:03:46.485 15:02:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:46.485 15:02:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:46.485 15:02:15 -- scripts/common.sh@364 -- # decimal 1 00:03:46.485 15:02:15 -- scripts/common.sh@352 -- # local d=1 00:03:46.485 15:02:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:46.485 15:02:15 -- scripts/common.sh@354 -- # echo 1 00:03:46.485 15:02:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:46.485 15:02:15 -- scripts/common.sh@365 -- # decimal 2 00:03:46.485 15:02:15 -- scripts/common.sh@352 -- # local d=2 00:03:46.485 15:02:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:46.485 15:02:15 -- scripts/common.sh@354 -- # echo 2 00:03:46.485 15:02:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:46.485 15:02:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:46.485 15:02:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:46.485 15:02:15 -- scripts/common.sh@367 -- # return 0 00:03:46.485 15:02:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:46.485 15:02:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:46.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.485 --rc genhtml_branch_coverage=1 00:03:46.485 --rc genhtml_function_coverage=1 00:03:46.485 --rc genhtml_legend=1 00:03:46.485 --rc geninfo_all_blocks=1 00:03:46.485 --rc geninfo_unexecuted_blocks=1 00:03:46.485 00:03:46.485 ' 00:03:46.485 15:02:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:46.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.485 --rc genhtml_branch_coverage=1 00:03:46.485 --rc genhtml_function_coverage=1 00:03:46.485 --rc genhtml_legend=1 00:03:46.485 --rc geninfo_all_blocks=1 00:03:46.485 --rc geninfo_unexecuted_blocks=1 00:03:46.485 00:03:46.485 ' 00:03:46.485 15:02:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:46.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.485 --rc genhtml_branch_coverage=1 00:03:46.485 --rc genhtml_function_coverage=1 00:03:46.485 --rc genhtml_legend=1 00:03:46.485 --rc geninfo_all_blocks=1 00:03:46.485 --rc geninfo_unexecuted_blocks=1 00:03:46.485 00:03:46.485 ' 00:03:46.485 15:02:15 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:46.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.485 --rc genhtml_branch_coverage=1 00:03:46.485 --rc genhtml_function_coverage=1 00:03:46.485 --rc genhtml_legend=1 00:03:46.485 --rc geninfo_all_blocks=1 00:03:46.485 --rc geninfo_unexecuted_blocks=1 00:03:46.485 00:03:46.485 ' 00:03:46.485 15:02:15 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:46.485 15:02:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:46.485 15:02:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:46.485 15:02:15 -- common/autotest_common.sh@10 -- # set +x 00:03:46.485 ************************************ 00:03:46.485 START TEST env_memory 00:03:46.485 ************************************ 00:03:46.485 15:02:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:46.485 00:03:46.485 00:03:46.485 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.485 http://cunit.sourceforge.net/ 00:03:46.485 00:03:46.485 00:03:46.485 Suite: memory 00:03:46.485 Test: alloc and free memory map ...[2024-11-06 15:02:15.719896] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:46.485 passed 00:03:46.485 Test: mem map translation ...[2024-11-06 15:02:15.750863] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:46.485 [2024-11-06 15:02:15.750937] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:46.485 [2024-11-06 15:02:15.750994] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:46.485 [2024-11-06 15:02:15.751005] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:46.745 passed 00:03:46.745 Test: mem map registration ...[2024-11-06 15:02:15.815954] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:46.745 [2024-11-06 15:02:15.816023] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:46.745 passed 00:03:46.745 Test: mem map adjacent registrations ...passed 00:03:46.745 00:03:46.745 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.745 suites 1 1 n/a 0 0 00:03:46.745 tests 4 4 4 0 0 00:03:46.745 asserts 152 152 152 0 n/a 00:03:46.745 00:03:46.745 Elapsed time = 0.216 seconds 00:03:46.745 00:03:46.745 real 0m0.235s 00:03:46.745 user 0m0.216s 00:03:46.745 sys 0m0.015s 00:03:46.745 15:02:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:46.745 15:02:15 -- common/autotest_common.sh@10 -- # set +x 00:03:46.745 ************************************ 00:03:46.745 END TEST env_memory 00:03:46.745 ************************************ 00:03:46.745 15:02:15 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:46.745 15:02:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:46.745 15:02:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:46.745 15:02:15 -- 
common/autotest_common.sh@10 -- # set +x 00:03:46.745 ************************************ 00:03:46.745 START TEST env_vtophys 00:03:46.745 ************************************ 00:03:46.745 15:02:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:46.745 EAL: lib.eal log level changed from notice to debug 00:03:46.745 EAL: Detected lcore 0 as core 0 on socket 0 00:03:46.745 EAL: Detected lcore 1 as core 0 on socket 0 00:03:46.745 EAL: Detected lcore 2 as core 0 on socket 0 00:03:46.745 EAL: Detected lcore 3 as core 0 on socket 0 00:03:46.745 EAL: Detected lcore 4 as core 0 on socket 0 00:03:46.745 EAL: Detected lcore 5 as core 0 on socket 0 00:03:46.745 EAL: Detected lcore 6 as core 0 on socket 0 00:03:46.745 EAL: Detected lcore 7 as core 0 on socket 0 00:03:46.745 EAL: Detected lcore 8 as core 0 on socket 0 00:03:46.745 EAL: Detected lcore 9 as core 0 on socket 0 00:03:46.745 EAL: Maximum logical cores by configuration: 128 00:03:46.746 EAL: Detected CPU lcores: 10 00:03:46.746 EAL: Detected NUMA nodes: 1 00:03:46.746 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:46.746 EAL: Detected shared linkage of DPDK 00:03:46.746 EAL: No shared files mode enabled, IPC will be disabled 00:03:46.746 EAL: Selected IOVA mode 'PA' 00:03:46.746 EAL: Probing VFIO support... 00:03:46.746 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:46.746 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:46.746 EAL: Ask a virtual area of 0x2e000 bytes 00:03:46.746 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:46.746 EAL: Setting up physically contiguous memory... 00:03:46.746 EAL: Setting maximum number of open files to 524288 00:03:46.746 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:46.746 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:46.746 EAL: Ask a virtual area of 0x61000 bytes 00:03:46.746 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:46.746 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:46.746 EAL: Ask a virtual area of 0x400000000 bytes 00:03:46.746 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:46.746 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:46.746 EAL: Ask a virtual area of 0x61000 bytes 00:03:46.746 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:46.746 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:46.746 EAL: Ask a virtual area of 0x400000000 bytes 00:03:46.746 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:46.746 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:46.746 EAL: Ask a virtual area of 0x61000 bytes 00:03:46.746 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:46.746 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:46.746 EAL: Ask a virtual area of 0x400000000 bytes 00:03:46.746 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:46.746 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:46.746 EAL: Ask a virtual area of 0x61000 bytes 00:03:46.746 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:46.746 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:46.746 EAL: Ask a virtual area of 0x400000000 bytes 00:03:46.746 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:46.746 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 
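The EAL messages above show why this run ends up in IOVA mode PA: no vfio module is loaded in the VM, and the heap is backed by the 2 MB hugepages reserved earlier. Both preconditions can be checked by hand before launching the env tests; this is only a convenience sketch, not something the harness itself runs:

    # is a VFIO module present? (EAL prints "VFIO modules not loaded" above because it is not)
    [[ -d /sys/module/vfio ]] && echo "vfio loaded" || echo "vfio not loaded"
    # 2 MB hugepages that back the EAL memseg lists reserved above
    grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
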
00:03:46.746 EAL: Hugepages will be freed exactly as allocated. 00:03:46.746 EAL: No shared files mode enabled, IPC is disabled 00:03:46.746 EAL: No shared files mode enabled, IPC is disabled 00:03:47.006 EAL: TSC frequency is ~2200000 KHz 00:03:47.006 EAL: Main lcore 0 is ready (tid=7f787183aa00;cpuset=[0]) 00:03:47.006 EAL: Trying to obtain current memory policy. 00:03:47.006 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.006 EAL: Restoring previous memory policy: 0 00:03:47.006 EAL: request: mp_malloc_sync 00:03:47.006 EAL: No shared files mode enabled, IPC is disabled 00:03:47.006 EAL: Heap on socket 0 was expanded by 2MB 00:03:47.006 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:47.006 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:47.006 EAL: Mem event callback 'spdk:(nil)' registered 00:03:47.006 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:47.006 00:03:47.007 00:03:47.007 CUnit - A unit testing framework for C - Version 2.1-3 00:03:47.007 http://cunit.sourceforge.net/ 00:03:47.007 00:03:47.007 00:03:47.007 Suite: components_suite 00:03:47.007 Test: vtophys_malloc_test ...passed 00:03:47.007 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:47.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.007 EAL: Restoring previous memory policy: 4 00:03:47.007 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.007 EAL: request: mp_malloc_sync 00:03:47.007 EAL: No shared files mode enabled, IPC is disabled 00:03:47.007 EAL: Heap on socket 0 was expanded by 4MB 00:03:47.007 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.007 EAL: request: mp_malloc_sync 00:03:47.007 EAL: No shared files mode enabled, IPC is disabled 00:03:47.007 EAL: Heap on socket 0 was shrunk by 4MB 00:03:47.007 EAL: Trying to obtain current memory policy. 00:03:47.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.007 EAL: Restoring previous memory policy: 4 00:03:47.007 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.007 EAL: request: mp_malloc_sync 00:03:47.007 EAL: No shared files mode enabled, IPC is disabled 00:03:47.007 EAL: Heap on socket 0 was expanded by 6MB 00:03:47.007 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.007 EAL: request: mp_malloc_sync 00:03:47.007 EAL: No shared files mode enabled, IPC is disabled 00:03:47.007 EAL: Heap on socket 0 was shrunk by 6MB 00:03:47.007 EAL: Trying to obtain current memory policy. 00:03:47.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.007 EAL: Restoring previous memory policy: 4 00:03:47.007 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.007 EAL: request: mp_malloc_sync 00:03:47.007 EAL: No shared files mode enabled, IPC is disabled 00:03:47.007 EAL: Heap on socket 0 was expanded by 10MB 00:03:47.007 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.007 EAL: request: mp_malloc_sync 00:03:47.007 EAL: No shared files mode enabled, IPC is disabled 00:03:47.007 EAL: Heap on socket 0 was shrunk by 10MB 00:03:47.007 EAL: Trying to obtain current memory policy. 
00:03:47.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.007 EAL: Restoring previous memory policy: 4 00:03:47.007 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.007 EAL: request: mp_malloc_sync 00:03:47.007 EAL: No shared files mode enabled, IPC is disabled 00:03:47.007 EAL: Heap on socket 0 was expanded by 18MB 00:03:47.007 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.007 EAL: request: mp_malloc_sync 00:03:47.007 EAL: No shared files mode enabled, IPC is disabled 00:03:47.007 EAL: Heap on socket 0 was shrunk by 18MB 00:03:47.007 EAL: Trying to obtain current memory policy. 00:03:47.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.007 EAL: Restoring previous memory policy: 4 00:03:47.007 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.007 EAL: request: mp_malloc_sync 00:03:47.007 EAL: No shared files mode enabled, IPC is disabled 00:03:47.007 EAL: Heap on socket 0 was expanded by 34MB 00:03:47.007 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.007 EAL: request: mp_malloc_sync 00:03:47.007 EAL: No shared files mode enabled, IPC is disabled 00:03:47.007 EAL: Heap on socket 0 was shrunk by 34MB 00:03:47.007 EAL: Trying to obtain current memory policy. 00:03:47.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.007 EAL: Restoring previous memory policy: 4 00:03:47.007 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.007 EAL: request: mp_malloc_sync 00:03:47.007 EAL: No shared files mode enabled, IPC is disabled 00:03:47.007 EAL: Heap on socket 0 was expanded by 66MB 00:03:47.007 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.007 EAL: request: mp_malloc_sync 00:03:47.007 EAL: No shared files mode enabled, IPC is disabled 00:03:47.007 EAL: Heap on socket 0 was shrunk by 66MB 00:03:47.007 EAL: Trying to obtain current memory policy. 00:03:47.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.007 EAL: Restoring previous memory policy: 4 00:03:47.007 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.007 EAL: request: mp_malloc_sync 00:03:47.007 EAL: No shared files mode enabled, IPC is disabled 00:03:47.007 EAL: Heap on socket 0 was expanded by 130MB 00:03:47.007 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.007 EAL: request: mp_malloc_sync 00:03:47.007 EAL: No shared files mode enabled, IPC is disabled 00:03:47.007 EAL: Heap on socket 0 was shrunk by 130MB 00:03:47.007 EAL: Trying to obtain current memory policy. 00:03:47.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.007 EAL: Restoring previous memory policy: 4 00:03:47.007 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.007 EAL: request: mp_malloc_sync 00:03:47.007 EAL: No shared files mode enabled, IPC is disabled 00:03:47.007 EAL: Heap on socket 0 was expanded by 258MB 00:03:47.007 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.267 EAL: request: mp_malloc_sync 00:03:47.267 EAL: No shared files mode enabled, IPC is disabled 00:03:47.267 EAL: Heap on socket 0 was shrunk by 258MB 00:03:47.267 EAL: Trying to obtain current memory policy. 
00:03:47.267 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.267 EAL: Restoring previous memory policy: 4 00:03:47.267 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.267 EAL: request: mp_malloc_sync 00:03:47.267 EAL: No shared files mode enabled, IPC is disabled 00:03:47.267 EAL: Heap on socket 0 was expanded by 514MB 00:03:47.267 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.267 EAL: request: mp_malloc_sync 00:03:47.267 EAL: No shared files mode enabled, IPC is disabled 00:03:47.267 EAL: Heap on socket 0 was shrunk by 514MB 00:03:47.267 EAL: Trying to obtain current memory policy. 00:03:47.267 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.526 EAL: Restoring previous memory policy: 4 00:03:47.526 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.526 EAL: request: mp_malloc_sync 00:03:47.526 EAL: No shared files mode enabled, IPC is disabled 00:03:47.526 EAL: Heap on socket 0 was expanded by 1026MB 00:03:47.526 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.786 passed 00:03:47.786 00:03:47.786 Run Summary: Type Total Ran Passed Failed Inactive 00:03:47.786 suites 1 1 n/a 0 0 00:03:47.786 tests 2 2 2 0 0 00:03:47.786 asserts 5302 5302 5302 0 n/a 00:03:47.786 00:03:47.786 Elapsed time = 0.672 seconds 00:03:47.786 EAL: request: mp_malloc_sync 00:03:47.786 EAL: No shared files mode enabled, IPC is disabled 00:03:47.786 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:47.786 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.786 EAL: request: mp_malloc_sync 00:03:47.786 EAL: No shared files mode enabled, IPC is disabled 00:03:47.786 EAL: Heap on socket 0 was shrunk by 2MB 00:03:47.786 EAL: No shared files mode enabled, IPC is disabled 00:03:47.786 EAL: No shared files mode enabled, IPC is disabled 00:03:47.786 EAL: No shared files mode enabled, IPC is disabled 00:03:47.786 00:03:47.786 real 0m0.864s 00:03:47.786 user 0m0.429s 00:03:47.786 sys 0m0.307s 00:03:47.786 15:02:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:47.786 15:02:16 -- common/autotest_common.sh@10 -- # set +x 00:03:47.786 ************************************ 00:03:47.786 END TEST env_vtophys 00:03:47.786 ************************************ 00:03:47.786 15:02:16 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:47.786 15:02:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:47.786 15:02:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:47.786 15:02:16 -- common/autotest_common.sh@10 -- # set +x 00:03:47.786 ************************************ 00:03:47.786 START TEST env_pci 00:03:47.786 ************************************ 00:03:47.786 15:02:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:47.786 00:03:47.786 00:03:47.786 CUnit - A unit testing framework for C - Version 2.1-3 00:03:47.786 http://cunit.sourceforge.net/ 00:03:47.786 00:03:47.786 00:03:47.786 Suite: pci 00:03:47.786 Test: pci_hook ...[2024-11-06 15:02:16.870815] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 53710 has claimed it 00:03:47.786 passed 00:03:47.786 00:03:47.786 Run Summary: Type Total Ran Passed Failed Inactive 00:03:47.786 suites 1 1 n/a 0 0 00:03:47.786 tests 1 1 1 0 0 00:03:47.786 asserts 25 25 25 0 n/a 00:03:47.786 00:03:47.786 Elapsed time = 0.002 seconds 00:03:47.786 EAL: Cannot find device (10000:00:01.0) 00:03:47.786 EAL: Failed to attach device 
on primary process 00:03:47.786 00:03:47.786 real 0m0.018s 00:03:47.786 user 0m0.009s 00:03:47.786 sys 0m0.009s 00:03:47.786 15:02:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:47.786 ************************************ 00:03:47.786 END TEST env_pci 00:03:47.786 ************************************ 00:03:47.786 15:02:16 -- common/autotest_common.sh@10 -- # set +x 00:03:47.786 15:02:16 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:47.786 15:02:16 -- env/env.sh@15 -- # uname 00:03:47.786 15:02:16 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:47.786 15:02:16 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:47.786 15:02:16 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:47.786 15:02:16 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:03:47.786 15:02:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:47.786 15:02:16 -- common/autotest_common.sh@10 -- # set +x 00:03:47.786 ************************************ 00:03:47.786 START TEST env_dpdk_post_init 00:03:47.786 ************************************ 00:03:47.786 15:02:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:47.786 EAL: Detected CPU lcores: 10 00:03:47.786 EAL: Detected NUMA nodes: 1 00:03:47.786 EAL: Detected shared linkage of DPDK 00:03:47.786 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:47.786 EAL: Selected IOVA mode 'PA' 00:03:48.046 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:48.046 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:03:48.046 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:03:48.046 Starting DPDK initialization... 00:03:48.046 Starting SPDK post initialization... 00:03:48.046 SPDK NVMe probe 00:03:48.046 Attaching to 0000:00:06.0 00:03:48.046 Attaching to 0000:00:07.0 00:03:48.046 Attached to 0000:00:06.0 00:03:48.046 Attached to 0000:00:07.0 00:03:48.046 Cleaning up... 
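The post-init probe can attach to 0000:00:06.0 and 0000:00:07.0 only because setup.sh had already moved them from the kernel nvme driver to uio_pci_generic. If an attach ever fails at this point, the current binding is easy to inspect; a small sketch using the standard sysfs layout (not part of the test):

    for bdf in 0000:00:06.0 0000:00:07.0; do
        if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
            echo "$bdf -> $(basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)")"
        else
            echo "$bdf -> no driver bound"
        fi
    done
    # expect uio_pci_generic (or vfio-pci) while SPDK owns the devices
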
00:03:48.046 00:03:48.046 real 0m0.180s 00:03:48.046 user 0m0.043s 00:03:48.046 sys 0m0.037s 00:03:48.046 15:02:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:48.046 ************************************ 00:03:48.046 END TEST env_dpdk_post_init 00:03:48.046 ************************************ 00:03:48.046 15:02:17 -- common/autotest_common.sh@10 -- # set +x 00:03:48.046 15:02:17 -- env/env.sh@26 -- # uname 00:03:48.046 15:02:17 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:48.046 15:02:17 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:48.046 15:02:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:48.046 15:02:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:48.047 15:02:17 -- common/autotest_common.sh@10 -- # set +x 00:03:48.047 ************************************ 00:03:48.047 START TEST env_mem_callbacks 00:03:48.047 ************************************ 00:03:48.047 15:02:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:48.047 EAL: Detected CPU lcores: 10 00:03:48.047 EAL: Detected NUMA nodes: 1 00:03:48.047 EAL: Detected shared linkage of DPDK 00:03:48.047 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:48.047 EAL: Selected IOVA mode 'PA' 00:03:48.047 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:48.047 00:03:48.047 00:03:48.047 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.047 http://cunit.sourceforge.net/ 00:03:48.047 00:03:48.047 00:03:48.047 Suite: memory 00:03:48.047 Test: test ... 00:03:48.047 register 0x200000200000 2097152 00:03:48.047 malloc 3145728 00:03:48.047 register 0x200000400000 4194304 00:03:48.047 buf 0x200000500000 len 3145728 PASSED 00:03:48.047 malloc 64 00:03:48.047 buf 0x2000004fff40 len 64 PASSED 00:03:48.047 malloc 4194304 00:03:48.047 register 0x200000800000 6291456 00:03:48.047 buf 0x200000a00000 len 4194304 PASSED 00:03:48.047 free 0x200000500000 3145728 00:03:48.047 free 0x2000004fff40 64 00:03:48.047 unregister 0x200000400000 4194304 PASSED 00:03:48.047 free 0x200000a00000 4194304 00:03:48.047 unregister 0x200000800000 6291456 PASSED 00:03:48.047 malloc 8388608 00:03:48.047 register 0x200000400000 10485760 00:03:48.047 buf 0x200000600000 len 8388608 PASSED 00:03:48.047 free 0x200000600000 8388608 00:03:48.047 unregister 0x200000400000 10485760 PASSED 00:03:48.047 passed 00:03:48.047 00:03:48.047 Run Summary: Type Total Ran Passed Failed Inactive 00:03:48.047 suites 1 1 n/a 0 0 00:03:48.047 tests 1 1 1 0 0 00:03:48.047 asserts 15 15 15 0 n/a 00:03:48.047 00:03:48.047 Elapsed time = 0.009 seconds 00:03:48.047 00:03:48.047 real 0m0.146s 00:03:48.047 user 0m0.017s 00:03:48.047 sys 0m0.026s 00:03:48.047 15:02:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:48.047 15:02:17 -- common/autotest_common.sh@10 -- # set +x 00:03:48.047 ************************************ 00:03:48.047 END TEST env_mem_callbacks 00:03:48.047 ************************************ 00:03:48.306 ************************************ 00:03:48.306 END TEST env 00:03:48.306 ************************************ 00:03:48.306 00:03:48.306 real 0m1.852s 00:03:48.306 user 0m0.908s 00:03:48.306 sys 0m0.602s 00:03:48.306 15:02:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:48.306 15:02:17 -- common/autotest_common.sh@10 -- # set +x 00:03:48.306 15:02:17 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
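The START TEST / END TEST banners and the real/user/sys lines that bracket every test in this log come from the run_test helper in autotest_common.sh, which times the wrapped command and prints the banner pair around it. A rough stand-in (illustrative only, not the actual helper) behaves like this:

    run_test_sketch() {                          # illustrative stand-in, not the real run_test
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"
        local rc=$?
        echo "************ END TEST $name ************"
        return "$rc"
    }
    run_test_sketch env /home/vagrant/spdk_repo/spdk/test/env/env.sh
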
00:03:48.306 15:02:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:48.306 15:02:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:48.306 15:02:17 -- common/autotest_common.sh@10 -- # set +x 00:03:48.306 ************************************ 00:03:48.306 START TEST rpc 00:03:48.306 ************************************ 00:03:48.306 15:02:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:48.306 * Looking for test storage... 00:03:48.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:48.306 15:02:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:48.306 15:02:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:48.307 15:02:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:48.307 15:02:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:48.307 15:02:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:48.307 15:02:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:48.307 15:02:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:48.307 15:02:17 -- scripts/common.sh@335 -- # IFS=.-: 00:03:48.307 15:02:17 -- scripts/common.sh@335 -- # read -ra ver1 00:03:48.307 15:02:17 -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.307 15:02:17 -- scripts/common.sh@336 -- # read -ra ver2 00:03:48.307 15:02:17 -- scripts/common.sh@337 -- # local 'op=<' 00:03:48.307 15:02:17 -- scripts/common.sh@339 -- # ver1_l=2 00:03:48.307 15:02:17 -- scripts/common.sh@340 -- # ver2_l=1 00:03:48.307 15:02:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:48.307 15:02:17 -- scripts/common.sh@343 -- # case "$op" in 00:03:48.307 15:02:17 -- scripts/common.sh@344 -- # : 1 00:03:48.307 15:02:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:48.307 15:02:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:48.307 15:02:17 -- scripts/common.sh@364 -- # decimal 1 00:03:48.307 15:02:17 -- scripts/common.sh@352 -- # local d=1 00:03:48.307 15:02:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.307 15:02:17 -- scripts/common.sh@354 -- # echo 1 00:03:48.307 15:02:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:48.307 15:02:17 -- scripts/common.sh@365 -- # decimal 2 00:03:48.307 15:02:17 -- scripts/common.sh@352 -- # local d=2 00:03:48.307 15:02:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.307 15:02:17 -- scripts/common.sh@354 -- # echo 2 00:03:48.307 15:02:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:48.307 15:02:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:48.307 15:02:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:48.307 15:02:17 -- scripts/common.sh@367 -- # return 0 00:03:48.307 15:02:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.307 15:02:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:48.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.307 --rc genhtml_branch_coverage=1 00:03:48.307 --rc genhtml_function_coverage=1 00:03:48.307 --rc genhtml_legend=1 00:03:48.307 --rc geninfo_all_blocks=1 00:03:48.307 --rc geninfo_unexecuted_blocks=1 00:03:48.307 00:03:48.307 ' 00:03:48.307 15:02:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:48.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.307 --rc genhtml_branch_coverage=1 00:03:48.307 --rc genhtml_function_coverage=1 00:03:48.307 --rc genhtml_legend=1 00:03:48.307 --rc geninfo_all_blocks=1 00:03:48.307 --rc geninfo_unexecuted_blocks=1 00:03:48.307 00:03:48.307 ' 00:03:48.307 15:02:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:48.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.307 --rc genhtml_branch_coverage=1 00:03:48.307 --rc genhtml_function_coverage=1 00:03:48.307 --rc genhtml_legend=1 00:03:48.307 --rc geninfo_all_blocks=1 00:03:48.307 --rc geninfo_unexecuted_blocks=1 00:03:48.307 00:03:48.307 ' 00:03:48.307 15:02:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:48.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.307 --rc genhtml_branch_coverage=1 00:03:48.307 --rc genhtml_function_coverage=1 00:03:48.307 --rc genhtml_legend=1 00:03:48.307 --rc geninfo_all_blocks=1 00:03:48.307 --rc geninfo_unexecuted_blocks=1 00:03:48.307 00:03:48.307 ' 00:03:48.307 15:02:17 -- rpc/rpc.sh@65 -- # spdk_pid=53832 00:03:48.307 15:02:17 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:48.307 15:02:17 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:48.307 15:02:17 -- rpc/rpc.sh@67 -- # waitforlisten 53832 00:03:48.307 15:02:17 -- common/autotest_common.sh@829 -- # '[' -z 53832 ']' 00:03:48.307 15:02:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:48.307 15:02:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:48.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:48.307 15:02:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
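Everything below talks to the spdk_tgt instance just launched (pid 53832) through scripts/rpc.py on the default socket /var/tmp/spdk.sock; the rpc_cmd seen in the trace is essentially a wrapper around that script. A minimal standalone version of the same flow, sketched with the same 8 MiB / 512-byte malloc bdev the rpc_integrity test creates (the polling loop is an assumption, not the harness's waitforlisten):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    tgt_pid=$!
    # poll until the target answers on the default socket /var/tmp/spdk.sock
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
        sleep 0.2
    done
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 8 512     # 8 MiB malloc bdev, 512-byte blocks
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | jq length   # expect 1
    kill "$tgt_pid"
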
00:03:48.307 15:02:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:48.307 15:02:17 -- common/autotest_common.sh@10 -- # set +x 00:03:48.567 [2024-11-06 15:02:17.634195] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:03:48.567 [2024-11-06 15:02:17.634326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53832 ] 00:03:48.567 [2024-11-06 15:02:17.777711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.567 [2024-11-06 15:02:17.834159] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:48.567 [2024-11-06 15:02:17.834300] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:48.567 [2024-11-06 15:02:17.834314] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 53832' to capture a snapshot of events at runtime. 00:03:48.567 [2024-11-06 15:02:17.834323] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid53832 for offline analysis/debug. 00:03:48.567 [2024-11-06 15:02:17.834353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.507 15:02:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:49.507 15:02:18 -- common/autotest_common.sh@862 -- # return 0 00:03:49.507 15:02:18 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:49.507 15:02:18 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:49.507 15:02:18 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:49.507 15:02:18 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:49.507 15:02:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:49.507 15:02:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:49.507 15:02:18 -- common/autotest_common.sh@10 -- # set +x 00:03:49.507 ************************************ 00:03:49.507 START TEST rpc_integrity 00:03:49.507 ************************************ 00:03:49.507 15:02:18 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:03:49.507 15:02:18 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:49.507 15:02:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.507 15:02:18 -- common/autotest_common.sh@10 -- # set +x 00:03:49.507 15:02:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.507 15:02:18 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:49.507 15:02:18 -- rpc/rpc.sh@13 -- # jq length 00:03:49.507 15:02:18 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:49.507 15:02:18 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:49.507 15:02:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.507 15:02:18 -- common/autotest_common.sh@10 -- # set +x 00:03:49.507 15:02:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.507 15:02:18 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:49.507 15:02:18 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:49.507 15:02:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.507 15:02:18 -- 
common/autotest_common.sh@10 -- # set +x 00:03:49.507 15:02:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.507 15:02:18 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:49.507 { 00:03:49.507 "name": "Malloc0", 00:03:49.507 "aliases": [ 00:03:49.507 "bb47998a-1b09-423d-9219-5568c66bbfd2" 00:03:49.507 ], 00:03:49.507 "product_name": "Malloc disk", 00:03:49.507 "block_size": 512, 00:03:49.507 "num_blocks": 16384, 00:03:49.507 "uuid": "bb47998a-1b09-423d-9219-5568c66bbfd2", 00:03:49.507 "assigned_rate_limits": { 00:03:49.507 "rw_ios_per_sec": 0, 00:03:49.507 "rw_mbytes_per_sec": 0, 00:03:49.507 "r_mbytes_per_sec": 0, 00:03:49.507 "w_mbytes_per_sec": 0 00:03:49.507 }, 00:03:49.507 "claimed": false, 00:03:49.507 "zoned": false, 00:03:49.507 "supported_io_types": { 00:03:49.507 "read": true, 00:03:49.507 "write": true, 00:03:49.507 "unmap": true, 00:03:49.507 "write_zeroes": true, 00:03:49.507 "flush": true, 00:03:49.507 "reset": true, 00:03:49.507 "compare": false, 00:03:49.507 "compare_and_write": false, 00:03:49.507 "abort": true, 00:03:49.507 "nvme_admin": false, 00:03:49.507 "nvme_io": false 00:03:49.507 }, 00:03:49.507 "memory_domains": [ 00:03:49.507 { 00:03:49.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.507 "dma_device_type": 2 00:03:49.507 } 00:03:49.507 ], 00:03:49.507 "driver_specific": {} 00:03:49.507 } 00:03:49.507 ]' 00:03:49.507 15:02:18 -- rpc/rpc.sh@17 -- # jq length 00:03:49.507 15:02:18 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:49.507 15:02:18 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:49.507 15:02:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.507 15:02:18 -- common/autotest_common.sh@10 -- # set +x 00:03:49.507 [2024-11-06 15:02:18.764568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:49.507 [2024-11-06 15:02:18.764630] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:49.507 [2024-11-06 15:02:18.764650] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7c44c0 00:03:49.507 [2024-11-06 15:02:18.764680] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:49.507 [2024-11-06 15:02:18.766270] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:49.507 [2024-11-06 15:02:18.766304] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:49.507 Passthru0 00:03:49.507 15:02:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.507 15:02:18 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:49.507 15:02:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.507 15:02:18 -- common/autotest_common.sh@10 -- # set +x 00:03:49.767 15:02:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.767 15:02:18 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:49.767 { 00:03:49.767 "name": "Malloc0", 00:03:49.767 "aliases": [ 00:03:49.767 "bb47998a-1b09-423d-9219-5568c66bbfd2" 00:03:49.767 ], 00:03:49.767 "product_name": "Malloc disk", 00:03:49.767 "block_size": 512, 00:03:49.767 "num_blocks": 16384, 00:03:49.767 "uuid": "bb47998a-1b09-423d-9219-5568c66bbfd2", 00:03:49.767 "assigned_rate_limits": { 00:03:49.767 "rw_ios_per_sec": 0, 00:03:49.767 "rw_mbytes_per_sec": 0, 00:03:49.767 "r_mbytes_per_sec": 0, 00:03:49.767 "w_mbytes_per_sec": 0 00:03:49.767 }, 00:03:49.767 "claimed": true, 00:03:49.767 "claim_type": "exclusive_write", 00:03:49.767 "zoned": false, 00:03:49.767 "supported_io_types": { 00:03:49.767 "read": true, 
00:03:49.767 "write": true, 00:03:49.767 "unmap": true, 00:03:49.767 "write_zeroes": true, 00:03:49.767 "flush": true, 00:03:49.767 "reset": true, 00:03:49.767 "compare": false, 00:03:49.767 "compare_and_write": false, 00:03:49.767 "abort": true, 00:03:49.767 "nvme_admin": false, 00:03:49.767 "nvme_io": false 00:03:49.767 }, 00:03:49.767 "memory_domains": [ 00:03:49.767 { 00:03:49.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.767 "dma_device_type": 2 00:03:49.767 } 00:03:49.767 ], 00:03:49.767 "driver_specific": {} 00:03:49.767 }, 00:03:49.767 { 00:03:49.768 "name": "Passthru0", 00:03:49.768 "aliases": [ 00:03:49.768 "4d237352-991b-57a4-affa-c431282b9621" 00:03:49.768 ], 00:03:49.768 "product_name": "passthru", 00:03:49.768 "block_size": 512, 00:03:49.768 "num_blocks": 16384, 00:03:49.768 "uuid": "4d237352-991b-57a4-affa-c431282b9621", 00:03:49.768 "assigned_rate_limits": { 00:03:49.768 "rw_ios_per_sec": 0, 00:03:49.768 "rw_mbytes_per_sec": 0, 00:03:49.768 "r_mbytes_per_sec": 0, 00:03:49.768 "w_mbytes_per_sec": 0 00:03:49.768 }, 00:03:49.768 "claimed": false, 00:03:49.768 "zoned": false, 00:03:49.768 "supported_io_types": { 00:03:49.768 "read": true, 00:03:49.768 "write": true, 00:03:49.768 "unmap": true, 00:03:49.768 "write_zeroes": true, 00:03:49.768 "flush": true, 00:03:49.768 "reset": true, 00:03:49.768 "compare": false, 00:03:49.768 "compare_and_write": false, 00:03:49.768 "abort": true, 00:03:49.768 "nvme_admin": false, 00:03:49.768 "nvme_io": false 00:03:49.768 }, 00:03:49.768 "memory_domains": [ 00:03:49.768 { 00:03:49.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.768 "dma_device_type": 2 00:03:49.768 } 00:03:49.768 ], 00:03:49.768 "driver_specific": { 00:03:49.768 "passthru": { 00:03:49.768 "name": "Passthru0", 00:03:49.768 "base_bdev_name": "Malloc0" 00:03:49.768 } 00:03:49.768 } 00:03:49.768 } 00:03:49.768 ]' 00:03:49.768 15:02:18 -- rpc/rpc.sh@21 -- # jq length 00:03:49.768 15:02:18 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:49.768 15:02:18 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:49.768 15:02:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.768 15:02:18 -- common/autotest_common.sh@10 -- # set +x 00:03:49.768 15:02:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.768 15:02:18 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:49.768 15:02:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.768 15:02:18 -- common/autotest_common.sh@10 -- # set +x 00:03:49.768 15:02:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.768 15:02:18 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:49.768 15:02:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.768 15:02:18 -- common/autotest_common.sh@10 -- # set +x 00:03:49.768 15:02:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.768 15:02:18 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:49.768 15:02:18 -- rpc/rpc.sh@26 -- # jq length 00:03:49.768 15:02:18 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:49.768 00:03:49.768 real 0m0.332s 00:03:49.768 user 0m0.224s 00:03:49.768 sys 0m0.039s 00:03:49.768 15:02:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:49.768 15:02:18 -- common/autotest_common.sh@10 -- # set +x 00:03:49.768 ************************************ 00:03:49.768 END TEST rpc_integrity 00:03:49.768 ************************************ 00:03:49.768 15:02:18 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:49.768 15:02:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 
00:03:49.768 15:02:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:49.768 15:02:18 -- common/autotest_common.sh@10 -- # set +x 00:03:49.768 ************************************ 00:03:49.768 START TEST rpc_plugins 00:03:49.768 ************************************ 00:03:49.768 15:02:18 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:03:49.768 15:02:18 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:49.768 15:02:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.768 15:02:18 -- common/autotest_common.sh@10 -- # set +x 00:03:49.768 15:02:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.768 15:02:19 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:49.768 15:02:19 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:49.768 15:02:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.768 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:03:49.768 15:02:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.768 15:02:19 -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:49.768 { 00:03:49.768 "name": "Malloc1", 00:03:49.768 "aliases": [ 00:03:49.768 "9c3a193a-010e-4b43-9837-c652ae80601a" 00:03:49.768 ], 00:03:49.768 "product_name": "Malloc disk", 00:03:49.768 "block_size": 4096, 00:03:49.768 "num_blocks": 256, 00:03:49.768 "uuid": "9c3a193a-010e-4b43-9837-c652ae80601a", 00:03:49.768 "assigned_rate_limits": { 00:03:49.768 "rw_ios_per_sec": 0, 00:03:49.768 "rw_mbytes_per_sec": 0, 00:03:49.768 "r_mbytes_per_sec": 0, 00:03:49.768 "w_mbytes_per_sec": 0 00:03:49.768 }, 00:03:49.768 "claimed": false, 00:03:49.768 "zoned": false, 00:03:49.768 "supported_io_types": { 00:03:49.768 "read": true, 00:03:49.768 "write": true, 00:03:49.768 "unmap": true, 00:03:49.768 "write_zeroes": true, 00:03:49.768 "flush": true, 00:03:49.768 "reset": true, 00:03:49.768 "compare": false, 00:03:49.768 "compare_and_write": false, 00:03:49.768 "abort": true, 00:03:49.768 "nvme_admin": false, 00:03:49.768 "nvme_io": false 00:03:49.768 }, 00:03:49.768 "memory_domains": [ 00:03:49.768 { 00:03:49.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.768 "dma_device_type": 2 00:03:49.768 } 00:03:49.768 ], 00:03:49.768 "driver_specific": {} 00:03:49.768 } 00:03:49.768 ]' 00:03:49.768 15:02:19 -- rpc/rpc.sh@32 -- # jq length 00:03:50.028 15:02:19 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:50.028 15:02:19 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:50.028 15:02:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.028 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:03:50.028 15:02:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.028 15:02:19 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:50.028 15:02:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.028 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:03:50.028 15:02:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.028 15:02:19 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:50.028 15:02:19 -- rpc/rpc.sh@36 -- # jq length 00:03:50.028 15:02:19 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:50.028 00:03:50.028 real 0m0.154s 00:03:50.028 user 0m0.106s 00:03:50.028 sys 0m0.016s 00:03:50.028 15:02:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:50.028 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:03:50.028 ************************************ 00:03:50.028 END TEST rpc_plugins 00:03:50.028 ************************************ 00:03:50.028 15:02:19 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:03:50.028 15:02:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:50.028 15:02:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:50.028 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:03:50.028 ************************************ 00:03:50.028 START TEST rpc_trace_cmd_test 00:03:50.028 ************************************ 00:03:50.028 15:02:19 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:03:50.028 15:02:19 -- rpc/rpc.sh@40 -- # local info 00:03:50.028 15:02:19 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:50.028 15:02:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.028 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:03:50.029 15:02:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.029 15:02:19 -- rpc/rpc.sh@42 -- # info='{ 00:03:50.029 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid53832", 00:03:50.029 "tpoint_group_mask": "0x8", 00:03:50.029 "iscsi_conn": { 00:03:50.029 "mask": "0x2", 00:03:50.029 "tpoint_mask": "0x0" 00:03:50.029 }, 00:03:50.029 "scsi": { 00:03:50.029 "mask": "0x4", 00:03:50.029 "tpoint_mask": "0x0" 00:03:50.029 }, 00:03:50.029 "bdev": { 00:03:50.029 "mask": "0x8", 00:03:50.029 "tpoint_mask": "0xffffffffffffffff" 00:03:50.029 }, 00:03:50.029 "nvmf_rdma": { 00:03:50.029 "mask": "0x10", 00:03:50.029 "tpoint_mask": "0x0" 00:03:50.029 }, 00:03:50.029 "nvmf_tcp": { 00:03:50.029 "mask": "0x20", 00:03:50.029 "tpoint_mask": "0x0" 00:03:50.029 }, 00:03:50.029 "ftl": { 00:03:50.029 "mask": "0x40", 00:03:50.029 "tpoint_mask": "0x0" 00:03:50.029 }, 00:03:50.029 "blobfs": { 00:03:50.029 "mask": "0x80", 00:03:50.029 "tpoint_mask": "0x0" 00:03:50.029 }, 00:03:50.029 "dsa": { 00:03:50.029 "mask": "0x200", 00:03:50.029 "tpoint_mask": "0x0" 00:03:50.029 }, 00:03:50.029 "thread": { 00:03:50.029 "mask": "0x400", 00:03:50.029 "tpoint_mask": "0x0" 00:03:50.029 }, 00:03:50.029 "nvme_pcie": { 00:03:50.029 "mask": "0x800", 00:03:50.029 "tpoint_mask": "0x0" 00:03:50.029 }, 00:03:50.029 "iaa": { 00:03:50.029 "mask": "0x1000", 00:03:50.029 "tpoint_mask": "0x0" 00:03:50.029 }, 00:03:50.029 "nvme_tcp": { 00:03:50.029 "mask": "0x2000", 00:03:50.029 "tpoint_mask": "0x0" 00:03:50.029 }, 00:03:50.029 "bdev_nvme": { 00:03:50.029 "mask": "0x4000", 00:03:50.029 "tpoint_mask": "0x0" 00:03:50.029 } 00:03:50.029 }' 00:03:50.029 15:02:19 -- rpc/rpc.sh@43 -- # jq length 00:03:50.029 15:02:19 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:03:50.029 15:02:19 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:50.289 15:02:19 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:50.289 15:02:19 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:50.289 15:02:19 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:50.289 15:02:19 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:50.289 15:02:19 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:50.289 15:02:19 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:50.289 15:02:19 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:50.289 00:03:50.289 real 0m0.279s 00:03:50.289 user 0m0.242s 00:03:50.289 sys 0m0.025s 00:03:50.289 15:02:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:50.289 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:03:50.289 ************************************ 00:03:50.289 END TEST rpc_trace_cmd_test 00:03:50.289 ************************************ 00:03:50.289 15:02:19 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:50.289 15:02:19 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:50.289 15:02:19 -- rpc/rpc.sh@81 -- # run_test 
rpc_daemon_integrity rpc_integrity 00:03:50.289 15:02:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:50.289 15:02:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:50.289 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:03:50.289 ************************************ 00:03:50.289 START TEST rpc_daemon_integrity 00:03:50.289 ************************************ 00:03:50.289 15:02:19 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:03:50.289 15:02:19 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:50.289 15:02:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.289 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:03:50.289 15:02:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.289 15:02:19 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:50.289 15:02:19 -- rpc/rpc.sh@13 -- # jq length 00:03:50.548 15:02:19 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:50.548 15:02:19 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:50.548 15:02:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.548 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:03:50.548 15:02:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.548 15:02:19 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:50.548 15:02:19 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:50.548 15:02:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.548 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:03:50.548 15:02:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.548 15:02:19 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:50.548 { 00:03:50.548 "name": "Malloc2", 00:03:50.548 "aliases": [ 00:03:50.548 "e391f59e-0636-4106-9214-ee70d4f425cf" 00:03:50.548 ], 00:03:50.548 "product_name": "Malloc disk", 00:03:50.548 "block_size": 512, 00:03:50.548 "num_blocks": 16384, 00:03:50.548 "uuid": "e391f59e-0636-4106-9214-ee70d4f425cf", 00:03:50.548 "assigned_rate_limits": { 00:03:50.548 "rw_ios_per_sec": 0, 00:03:50.548 "rw_mbytes_per_sec": 0, 00:03:50.548 "r_mbytes_per_sec": 0, 00:03:50.548 "w_mbytes_per_sec": 0 00:03:50.548 }, 00:03:50.548 "claimed": false, 00:03:50.548 "zoned": false, 00:03:50.548 "supported_io_types": { 00:03:50.548 "read": true, 00:03:50.548 "write": true, 00:03:50.548 "unmap": true, 00:03:50.548 "write_zeroes": true, 00:03:50.548 "flush": true, 00:03:50.548 "reset": true, 00:03:50.548 "compare": false, 00:03:50.548 "compare_and_write": false, 00:03:50.548 "abort": true, 00:03:50.548 "nvme_admin": false, 00:03:50.548 "nvme_io": false 00:03:50.548 }, 00:03:50.548 "memory_domains": [ 00:03:50.548 { 00:03:50.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.548 "dma_device_type": 2 00:03:50.548 } 00:03:50.548 ], 00:03:50.548 "driver_specific": {} 00:03:50.548 } 00:03:50.548 ]' 00:03:50.548 15:02:19 -- rpc/rpc.sh@17 -- # jq length 00:03:50.548 15:02:19 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:50.548 15:02:19 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:50.548 15:02:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.548 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:03:50.548 [2024-11-06 15:02:19.680870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:50.548 [2024-11-06 15:02:19.680927] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:50.548 [2024-11-06 15:02:19.680951] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7c41c0 00:03:50.549 [2024-11-06 
15:02:19.680961] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:50.549 [2024-11-06 15:02:19.682358] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:50.549 [2024-11-06 15:02:19.682392] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:50.549 Passthru0 00:03:50.549 15:02:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.549 15:02:19 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:50.549 15:02:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.549 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:03:50.549 15:02:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.549 15:02:19 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:50.549 { 00:03:50.549 "name": "Malloc2", 00:03:50.549 "aliases": [ 00:03:50.549 "e391f59e-0636-4106-9214-ee70d4f425cf" 00:03:50.549 ], 00:03:50.549 "product_name": "Malloc disk", 00:03:50.549 "block_size": 512, 00:03:50.549 "num_blocks": 16384, 00:03:50.549 "uuid": "e391f59e-0636-4106-9214-ee70d4f425cf", 00:03:50.549 "assigned_rate_limits": { 00:03:50.549 "rw_ios_per_sec": 0, 00:03:50.549 "rw_mbytes_per_sec": 0, 00:03:50.549 "r_mbytes_per_sec": 0, 00:03:50.549 "w_mbytes_per_sec": 0 00:03:50.549 }, 00:03:50.549 "claimed": true, 00:03:50.549 "claim_type": "exclusive_write", 00:03:50.549 "zoned": false, 00:03:50.549 "supported_io_types": { 00:03:50.549 "read": true, 00:03:50.549 "write": true, 00:03:50.549 "unmap": true, 00:03:50.549 "write_zeroes": true, 00:03:50.549 "flush": true, 00:03:50.549 "reset": true, 00:03:50.549 "compare": false, 00:03:50.549 "compare_and_write": false, 00:03:50.549 "abort": true, 00:03:50.549 "nvme_admin": false, 00:03:50.549 "nvme_io": false 00:03:50.549 }, 00:03:50.549 "memory_domains": [ 00:03:50.549 { 00:03:50.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.549 "dma_device_type": 2 00:03:50.549 } 00:03:50.549 ], 00:03:50.549 "driver_specific": {} 00:03:50.549 }, 00:03:50.549 { 00:03:50.549 "name": "Passthru0", 00:03:50.549 "aliases": [ 00:03:50.549 "0ab7c127-22ae-52a0-a3c3-489121880d25" 00:03:50.549 ], 00:03:50.549 "product_name": "passthru", 00:03:50.549 "block_size": 512, 00:03:50.549 "num_blocks": 16384, 00:03:50.549 "uuid": "0ab7c127-22ae-52a0-a3c3-489121880d25", 00:03:50.549 "assigned_rate_limits": { 00:03:50.549 "rw_ios_per_sec": 0, 00:03:50.549 "rw_mbytes_per_sec": 0, 00:03:50.549 "r_mbytes_per_sec": 0, 00:03:50.549 "w_mbytes_per_sec": 0 00:03:50.549 }, 00:03:50.549 "claimed": false, 00:03:50.549 "zoned": false, 00:03:50.549 "supported_io_types": { 00:03:50.549 "read": true, 00:03:50.549 "write": true, 00:03:50.549 "unmap": true, 00:03:50.549 "write_zeroes": true, 00:03:50.549 "flush": true, 00:03:50.549 "reset": true, 00:03:50.549 "compare": false, 00:03:50.549 "compare_and_write": false, 00:03:50.549 "abort": true, 00:03:50.549 "nvme_admin": false, 00:03:50.549 "nvme_io": false 00:03:50.549 }, 00:03:50.549 "memory_domains": [ 00:03:50.549 { 00:03:50.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.549 "dma_device_type": 2 00:03:50.549 } 00:03:50.549 ], 00:03:50.549 "driver_specific": { 00:03:50.549 "passthru": { 00:03:50.549 "name": "Passthru0", 00:03:50.549 "base_bdev_name": "Malloc2" 00:03:50.549 } 00:03:50.549 } 00:03:50.549 } 00:03:50.549 ]' 00:03:50.549 15:02:19 -- rpc/rpc.sh@21 -- # jq length 00:03:50.549 15:02:19 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:50.549 15:02:19 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:50.549 15:02:19 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.549 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:03:50.549 15:02:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.549 15:02:19 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:50.549 15:02:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.549 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:03:50.549 15:02:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.549 15:02:19 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:50.549 15:02:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.549 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:03:50.549 15:02:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.549 15:02:19 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:50.549 15:02:19 -- rpc/rpc.sh@26 -- # jq length 00:03:50.808 15:02:19 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:50.808 00:03:50.808 real 0m0.323s 00:03:50.808 user 0m0.221s 00:03:50.808 sys 0m0.037s 00:03:50.808 15:02:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:50.808 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:03:50.808 ************************************ 00:03:50.808 END TEST rpc_daemon_integrity 00:03:50.808 ************************************ 00:03:50.808 15:02:19 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:50.808 15:02:19 -- rpc/rpc.sh@84 -- # killprocess 53832 00:03:50.808 15:02:19 -- common/autotest_common.sh@936 -- # '[' -z 53832 ']' 00:03:50.808 15:02:19 -- common/autotest_common.sh@940 -- # kill -0 53832 00:03:50.808 15:02:19 -- common/autotest_common.sh@941 -- # uname 00:03:50.808 15:02:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:50.808 15:02:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 53832 00:03:50.808 15:02:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:50.808 15:02:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:50.808 15:02:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 53832' 00:03:50.808 killing process with pid 53832 00:03:50.808 15:02:19 -- common/autotest_common.sh@955 -- # kill 53832 00:03:50.808 15:02:19 -- common/autotest_common.sh@960 -- # wait 53832 00:03:51.068 00:03:51.068 real 0m2.801s 00:03:51.068 user 0m3.783s 00:03:51.068 sys 0m0.557s 00:03:51.068 15:02:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:51.068 15:02:20 -- common/autotest_common.sh@10 -- # set +x 00:03:51.068 ************************************ 00:03:51.068 END TEST rpc 00:03:51.068 ************************************ 00:03:51.068 15:02:20 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:51.068 15:02:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:51.068 15:02:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:51.068 15:02:20 -- common/autotest_common.sh@10 -- # set +x 00:03:51.068 ************************************ 00:03:51.068 START TEST rpc_client 00:03:51.068 ************************************ 00:03:51.068 15:02:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:51.068 * Looking for test storage... 
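The killprocess helper that shuts the rpc target down follows the pattern visible in the trace: confirm the pid is still alive and still belongs to an SPDK reactor before signalling it. A rough bash equivalent, simplified rather than the exact helper:

    pid=53832
    if kill -0 "$pid" 2>/dev/null; then
        # make sure the pid was not reused by something else (the helper refuses to kill sudo)
        comm=$(ps --no-headers -o comm= "$pid")
        if [ "$comm" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"   # reap the child and pick up its exit status
        fi
    fi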
00:03:51.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:03:51.068 15:02:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:51.068 15:02:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:51.068 15:02:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:51.398 15:02:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:51.398 15:02:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:51.398 15:02:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:51.398 15:02:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:51.398 15:02:20 -- scripts/common.sh@335 -- # IFS=.-: 00:03:51.398 15:02:20 -- scripts/common.sh@335 -- # read -ra ver1 00:03:51.398 15:02:20 -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.398 15:02:20 -- scripts/common.sh@336 -- # read -ra ver2 00:03:51.398 15:02:20 -- scripts/common.sh@337 -- # local 'op=<' 00:03:51.398 15:02:20 -- scripts/common.sh@339 -- # ver1_l=2 00:03:51.398 15:02:20 -- scripts/common.sh@340 -- # ver2_l=1 00:03:51.398 15:02:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:51.398 15:02:20 -- scripts/common.sh@343 -- # case "$op" in 00:03:51.398 15:02:20 -- scripts/common.sh@344 -- # : 1 00:03:51.398 15:02:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:51.398 15:02:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:51.398 15:02:20 -- scripts/common.sh@364 -- # decimal 1 00:03:51.398 15:02:20 -- scripts/common.sh@352 -- # local d=1 00:03:51.398 15:02:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.398 15:02:20 -- scripts/common.sh@354 -- # echo 1 00:03:51.398 15:02:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:51.398 15:02:20 -- scripts/common.sh@365 -- # decimal 2 00:03:51.398 15:02:20 -- scripts/common.sh@352 -- # local d=2 00:03:51.398 15:02:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.398 15:02:20 -- scripts/common.sh@354 -- # echo 2 00:03:51.398 15:02:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:51.398 15:02:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:51.398 15:02:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:51.398 15:02:20 -- scripts/common.sh@367 -- # return 0 00:03:51.398 15:02:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.398 15:02:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:51.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.398 --rc genhtml_branch_coverage=1 00:03:51.398 --rc genhtml_function_coverage=1 00:03:51.398 --rc genhtml_legend=1 00:03:51.398 --rc geninfo_all_blocks=1 00:03:51.398 --rc geninfo_unexecuted_blocks=1 00:03:51.398 00:03:51.398 ' 00:03:51.398 15:02:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:51.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.398 --rc genhtml_branch_coverage=1 00:03:51.398 --rc genhtml_function_coverage=1 00:03:51.398 --rc genhtml_legend=1 00:03:51.398 --rc geninfo_all_blocks=1 00:03:51.398 --rc geninfo_unexecuted_blocks=1 00:03:51.398 00:03:51.398 ' 00:03:51.398 15:02:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:51.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.398 --rc genhtml_branch_coverage=1 00:03:51.398 --rc genhtml_function_coverage=1 00:03:51.398 --rc genhtml_legend=1 00:03:51.398 --rc geninfo_all_blocks=1 00:03:51.398 --rc geninfo_unexecuted_blocks=1 00:03:51.398 00:03:51.398 ' 00:03:51.398 
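The lcov branch selected above comes from a small field-by-field version comparison in scripts/common.sh (the lt 1.15 2 call in the trace). A simplified sketch of the same idea, not the exact implementation:

    version_lt() {
        local -a a b
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            # missing fields count as 0; the first differing field decides
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    version_lt 1.15 2 && echo "lcov predates 2.x, enable the branch/function coverage flags"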
15:02:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:51.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.398 --rc genhtml_branch_coverage=1 00:03:51.398 --rc genhtml_function_coverage=1 00:03:51.398 --rc genhtml_legend=1 00:03:51.398 --rc geninfo_all_blocks=1 00:03:51.398 --rc geninfo_unexecuted_blocks=1 00:03:51.398 00:03:51.398 ' 00:03:51.398 15:02:20 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:03:51.398 OK 00:03:51.398 15:02:20 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:51.398 00:03:51.398 real 0m0.190s 00:03:51.398 user 0m0.112s 00:03:51.398 sys 0m0.089s 00:03:51.398 15:02:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:51.398 15:02:20 -- common/autotest_common.sh@10 -- # set +x 00:03:51.398 ************************************ 00:03:51.398 END TEST rpc_client 00:03:51.398 ************************************ 00:03:51.398 15:02:20 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:03:51.398 15:02:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:51.398 15:02:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:51.398 15:02:20 -- common/autotest_common.sh@10 -- # set +x 00:03:51.398 ************************************ 00:03:51.398 START TEST json_config 00:03:51.398 ************************************ 00:03:51.398 15:02:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:03:51.398 15:02:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:51.398 15:02:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:51.398 15:02:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:51.398 15:02:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:51.398 15:02:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:51.398 15:02:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:51.398 15:02:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:51.398 15:02:20 -- scripts/common.sh@335 -- # IFS=.-: 00:03:51.398 15:02:20 -- scripts/common.sh@335 -- # read -ra ver1 00:03:51.398 15:02:20 -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.398 15:02:20 -- scripts/common.sh@336 -- # read -ra ver2 00:03:51.398 15:02:20 -- scripts/common.sh@337 -- # local 'op=<' 00:03:51.398 15:02:20 -- scripts/common.sh@339 -- # ver1_l=2 00:03:51.398 15:02:20 -- scripts/common.sh@340 -- # ver2_l=1 00:03:51.398 15:02:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:51.398 15:02:20 -- scripts/common.sh@343 -- # case "$op" in 00:03:51.398 15:02:20 -- scripts/common.sh@344 -- # : 1 00:03:51.398 15:02:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:51.398 15:02:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:51.398 15:02:20 -- scripts/common.sh@364 -- # decimal 1 00:03:51.398 15:02:20 -- scripts/common.sh@352 -- # local d=1 00:03:51.398 15:02:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.398 15:02:20 -- scripts/common.sh@354 -- # echo 1 00:03:51.398 15:02:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:51.398 15:02:20 -- scripts/common.sh@365 -- # decimal 2 00:03:51.398 15:02:20 -- scripts/common.sh@352 -- # local d=2 00:03:51.398 15:02:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.398 15:02:20 -- scripts/common.sh@354 -- # echo 2 00:03:51.398 15:02:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:51.398 15:02:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:51.398 15:02:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:51.398 15:02:20 -- scripts/common.sh@367 -- # return 0 00:03:51.398 15:02:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.398 15:02:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:51.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.398 --rc genhtml_branch_coverage=1 00:03:51.399 --rc genhtml_function_coverage=1 00:03:51.399 --rc genhtml_legend=1 00:03:51.399 --rc geninfo_all_blocks=1 00:03:51.399 --rc geninfo_unexecuted_blocks=1 00:03:51.399 00:03:51.399 ' 00:03:51.399 15:02:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:51.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.399 --rc genhtml_branch_coverage=1 00:03:51.399 --rc genhtml_function_coverage=1 00:03:51.399 --rc genhtml_legend=1 00:03:51.399 --rc geninfo_all_blocks=1 00:03:51.399 --rc geninfo_unexecuted_blocks=1 00:03:51.399 00:03:51.399 ' 00:03:51.399 15:02:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:51.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.399 --rc genhtml_branch_coverage=1 00:03:51.399 --rc genhtml_function_coverage=1 00:03:51.399 --rc genhtml_legend=1 00:03:51.399 --rc geninfo_all_blocks=1 00:03:51.399 --rc geninfo_unexecuted_blocks=1 00:03:51.399 00:03:51.399 ' 00:03:51.399 15:02:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:51.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.399 --rc genhtml_branch_coverage=1 00:03:51.399 --rc genhtml_function_coverage=1 00:03:51.399 --rc genhtml_legend=1 00:03:51.399 --rc geninfo_all_blocks=1 00:03:51.399 --rc geninfo_unexecuted_blocks=1 00:03:51.399 00:03:51.399 ' 00:03:51.399 15:02:20 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:51.399 15:02:20 -- nvmf/common.sh@7 -- # uname -s 00:03:51.399 15:02:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:51.399 15:02:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:51.399 15:02:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:51.399 15:02:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:51.399 15:02:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:51.399 15:02:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:51.399 15:02:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:51.399 15:02:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:51.399 15:02:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:51.399 15:02:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:51.399 15:02:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 
00:03:51.399 15:02:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:03:51.399 15:02:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:51.399 15:02:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:51.399 15:02:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:51.399 15:02:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:51.399 15:02:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:51.399 15:02:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:51.399 15:02:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:51.399 15:02:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.399 15:02:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.399 15:02:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.399 15:02:20 -- paths/export.sh@5 -- # export PATH 00:03:51.399 15:02:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.399 15:02:20 -- nvmf/common.sh@46 -- # : 0 00:03:51.399 15:02:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:51.399 15:02:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:51.399 15:02:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:51.399 15:02:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:51.399 15:02:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:51.399 15:02:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:51.399 15:02:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:51.399 15:02:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:51.399 15:02:20 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:03:51.399 15:02:20 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:03:51.399 15:02:20 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:03:51.399 15:02:20 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:51.399 15:02:20 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:03:51.399 15:02:20 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:03:51.399 15:02:20 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:51.399 15:02:20 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:03:51.399 15:02:20 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:51.399 15:02:20 -- json_config/json_config.sh@32 -- # declare -A app_params 00:03:51.399 15:02:20 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:03:51.399 15:02:20 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:03:51.399 15:02:20 -- json_config/json_config.sh@43 -- # last_event_id=0 00:03:51.681 15:02:20 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:51.682 15:02:20 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:03:51.682 INFO: JSON configuration test init 00:03:51.682 15:02:20 -- json_config/json_config.sh@420 -- # json_config_test_init 00:03:51.682 15:02:20 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:03:51.682 15:02:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:51.682 15:02:20 -- common/autotest_common.sh@10 -- # set +x 00:03:51.682 15:02:20 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:03:51.682 15:02:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:51.682 15:02:20 -- common/autotest_common.sh@10 -- # set +x 00:03:51.682 Waiting for target to run... 00:03:51.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:51.682 15:02:20 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:03:51.682 15:02:20 -- json_config/json_config.sh@98 -- # local app=target 00:03:51.682 15:02:20 -- json_config/json_config.sh@99 -- # shift 00:03:51.682 15:02:20 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:03:51.682 15:02:20 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:03:51.682 15:02:20 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:03:51.682 15:02:20 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:51.682 15:02:20 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:51.682 15:02:20 -- json_config/json_config.sh@111 -- # app_pid[$app]=54085 00:03:51.682 15:02:20 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:03:51.682 15:02:20 -- json_config/json_config.sh@114 -- # waitforlisten 54085 /var/tmp/spdk_tgt.sock 00:03:51.682 15:02:20 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:51.682 15:02:20 -- common/autotest_common.sh@829 -- # '[' -z 54085 ']' 00:03:51.682 15:02:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:51.682 15:02:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:51.682 15:02:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
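json_config_test_start_app above amounts to launching spdk_tgt against a private RPC socket and blocking until that socket is usable. A rough equivalent of the start/wait step; the polling loop is a simplification of the waitforlisten helper, not its actual body:

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    tgt_pid=$!
    # wait until the UNIX-domain RPC socket shows up, bailing out if the target dies first
    while [ ! -S /var/tmp/spdk_tgt.sock ]; do
        kill -0 "$tgt_pid" 2>/dev/null || { echo "spdk_tgt exited during startup" >&2; exit 1; }
        sleep 0.1
    done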
00:03:51.682 15:02:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:51.682 15:02:20 -- common/autotest_common.sh@10 -- # set +x 00:03:51.682 [2024-11-06 15:02:20.722557] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:03:51.682 [2024-11-06 15:02:20.722681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54085 ] 00:03:51.941 [2024-11-06 15:02:21.014266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.941 [2024-11-06 15:02:21.057998] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:51.941 [2024-11-06 15:02:21.058181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.508 00:03:52.508 15:02:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:52.508 15:02:21 -- common/autotest_common.sh@862 -- # return 0 00:03:52.508 15:02:21 -- json_config/json_config.sh@115 -- # echo '' 00:03:52.508 15:02:21 -- json_config/json_config.sh@322 -- # create_accel_config 00:03:52.508 15:02:21 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:03:52.508 15:02:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:52.508 15:02:21 -- common/autotest_common.sh@10 -- # set +x 00:03:52.508 15:02:21 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:03:52.508 15:02:21 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:03:52.508 15:02:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:52.508 15:02:21 -- common/autotest_common.sh@10 -- # set +x 00:03:52.508 15:02:21 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:52.508 15:02:21 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:03:52.508 15:02:21 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:53.076 15:02:22 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:03:53.076 15:02:22 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:03:53.076 15:02:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:53.076 15:02:22 -- common/autotest_common.sh@10 -- # set +x 00:03:53.076 15:02:22 -- json_config/json_config.sh@48 -- # local ret=0 00:03:53.076 15:02:22 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:53.076 15:02:22 -- json_config/json_config.sh@49 -- # local enabled_types 00:03:53.076 15:02:22 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:53.076 15:02:22 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:53.076 15:02:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:53.336 15:02:22 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:53.336 15:02:22 -- json_config/json_config.sh@51 -- # local get_types 00:03:53.336 15:02:22 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:53.336 15:02:22 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:03:53.336 15:02:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:53.336 15:02:22 -- 
common/autotest_common.sh@10 -- # set +x 00:03:53.336 15:02:22 -- json_config/json_config.sh@58 -- # return 0 00:03:53.336 15:02:22 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:03:53.336 15:02:22 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:03:53.336 15:02:22 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:03:53.336 15:02:22 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:03:53.336 15:02:22 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:03:53.336 15:02:22 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:03:53.336 15:02:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:53.336 15:02:22 -- common/autotest_common.sh@10 -- # set +x 00:03:53.336 15:02:22 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:53.336 15:02:22 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:03:53.336 15:02:22 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:03:53.336 15:02:22 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:53.336 15:02:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:53.595 MallocForNvmf0 00:03:53.595 15:02:22 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:53.595 15:02:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:53.855 MallocForNvmf1 00:03:53.855 15:02:23 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:53.855 15:02:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:54.114 [2024-11-06 15:02:23.359290] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:54.114 15:02:23 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:54.114 15:02:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:54.373 15:02:23 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:54.373 15:02:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:54.632 15:02:23 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:54.632 15:02:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:54.891 15:02:24 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:54.891 15:02:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:55.151 [2024-11-06 15:02:24.267747] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:55.151 
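Collected from the trace above, the NVMe-oF/TCP configuration the test builds is this rpc.py sequence, with arguments exactly as logged; rpc is just shorthand for scripts/rpc.py pointed at the target's socket:

    rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    # two malloc bdevs to serve as namespaces
    $rpc bdev_malloc_create 8 512  --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    # TCP transport, one subsystem, both namespaces, listener on 127.0.0.1:4420
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420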
15:02:24 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:03:55.151 15:02:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:55.151 15:02:24 -- common/autotest_common.sh@10 -- # set +x 00:03:55.151 15:02:24 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:03:55.151 15:02:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:55.151 15:02:24 -- common/autotest_common.sh@10 -- # set +x 00:03:55.151 15:02:24 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:03:55.151 15:02:24 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:55.151 15:02:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:55.410 MallocBdevForConfigChangeCheck 00:03:55.410 15:02:24 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:03:55.410 15:02:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:55.410 15:02:24 -- common/autotest_common.sh@10 -- # set +x 00:03:55.410 15:02:24 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:03:55.410 15:02:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:55.979 INFO: shutting down applications... 00:03:55.979 15:02:25 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:03:55.979 15:02:25 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:03:55.979 15:02:25 -- json_config/json_config.sh@431 -- # json_config_clear target 00:03:55.979 15:02:25 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:03:55.979 15:02:25 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:56.238 Calling clear_iscsi_subsystem 00:03:56.238 Calling clear_nvmf_subsystem 00:03:56.238 Calling clear_nbd_subsystem 00:03:56.238 Calling clear_ublk_subsystem 00:03:56.238 Calling clear_vhost_blk_subsystem 00:03:56.238 Calling clear_vhost_scsi_subsystem 00:03:56.238 Calling clear_scheduler_subsystem 00:03:56.238 Calling clear_bdev_subsystem 00:03:56.238 Calling clear_accel_subsystem 00:03:56.238 Calling clear_vmd_subsystem 00:03:56.238 Calling clear_sock_subsystem 00:03:56.238 Calling clear_iobuf_subsystem 00:03:56.238 15:02:25 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:03:56.238 15:02:25 -- json_config/json_config.sh@396 -- # count=100 00:03:56.238 15:02:25 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:03:56.238 15:02:25 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.239 15:02:25 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:56.239 15:02:25 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:03:56.498 15:02:25 -- json_config/json_config.sh@398 -- # break 00:03:56.498 15:02:25 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:03:56.498 15:02:25 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:03:56.498 15:02:25 -- json_config/json_config.sh@120 -- # local app=target 00:03:56.498 15:02:25 -- 
json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:03:56.498 15:02:25 -- json_config/json_config.sh@124 -- # [[ -n 54085 ]] 00:03:56.499 15:02:25 -- json_config/json_config.sh@127 -- # kill -SIGINT 54085 00:03:56.499 15:02:25 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:03:56.499 15:02:25 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:03:56.499 15:02:25 -- json_config/json_config.sh@130 -- # kill -0 54085 00:03:56.499 15:02:25 -- json_config/json_config.sh@134 -- # sleep 0.5 00:03:57.069 15:02:26 -- json_config/json_config.sh@129 -- # (( i++ )) 00:03:57.069 15:02:26 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:03:57.069 15:02:26 -- json_config/json_config.sh@130 -- # kill -0 54085 00:03:57.069 15:02:26 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:03:57.069 15:02:26 -- json_config/json_config.sh@132 -- # break 00:03:57.069 15:02:26 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:03:57.069 SPDK target shutdown done 00:03:57.069 15:02:26 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:03:57.069 INFO: relaunching applications... 00:03:57.069 15:02:26 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:03:57.069 15:02:26 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:57.069 15:02:26 -- json_config/json_config.sh@98 -- # local app=target 00:03:57.069 15:02:26 -- json_config/json_config.sh@99 -- # shift 00:03:57.069 15:02:26 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:03:57.069 15:02:26 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:03:57.069 15:02:26 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:03:57.069 15:02:26 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:57.069 15:02:26 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:57.069 15:02:26 -- json_config/json_config.sh@111 -- # app_pid[$app]=54270 00:03:57.069 15:02:26 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:03:57.069 Waiting for target to run... 00:03:57.069 15:02:26 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:57.069 15:02:26 -- json_config/json_config.sh@114 -- # waitforlisten 54270 /var/tmp/spdk_tgt.sock 00:03:57.069 15:02:26 -- common/autotest_common.sh@829 -- # '[' -z 54270 ']' 00:03:57.069 15:02:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:57.069 15:02:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:57.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:57.069 15:02:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:57.069 15:02:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:57.069 15:02:26 -- common/autotest_common.sh@10 -- # set +x 00:03:57.069 [2024-11-06 15:02:26.329189] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
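The relaunch step exercises the round trip of persisting the live configuration and starting a fresh target from it. Reduced to commands with the paths from this run; redirecting save_config into spdk_tgt_config.json is an assumption about how that file is produced, the --json startup flag is taken from the trace:

    # snapshot the running target's configuration as JSON
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    # stop the old target gracefully, then start a new one that applies the file at boot
    kill -SIGINT "$tgt_pid" && wait "$tgt_pid"
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &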
00:03:57.069 [2024-11-06 15:02:26.329292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54270 ] 00:03:57.637 [2024-11-06 15:02:26.631717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.637 [2024-11-06 15:02:26.673921] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:57.637 [2024-11-06 15:02:26.674108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.897 [2024-11-06 15:02:26.970610] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:57.897 [2024-11-06 15:02:27.002612] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:58.156 15:02:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:58.156 15:02:27 -- common/autotest_common.sh@862 -- # return 0 00:03:58.156 00:03:58.156 15:02:27 -- json_config/json_config.sh@115 -- # echo '' 00:03:58.156 15:02:27 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:03:58.156 INFO: Checking if target configuration is the same... 00:03:58.156 15:02:27 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:58.157 15:02:27 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:03:58.157 15:02:27 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:58.157 15:02:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:58.157 + '[' 2 -ne 2 ']' 00:03:58.157 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:03:58.157 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:03:58.157 + rootdir=/home/vagrant/spdk_repo/spdk 00:03:58.157 +++ basename /dev/fd/62 00:03:58.157 ++ mktemp /tmp/62.XXX 00:03:58.157 + tmp_file_1=/tmp/62.kRO 00:03:58.157 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:58.157 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:58.157 + tmp_file_2=/tmp/spdk_tgt_config.json.zow 00:03:58.157 + ret=0 00:03:58.157 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:58.416 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:58.416 + diff -u /tmp/62.kRO /tmp/spdk_tgt_config.json.zow 00:03:58.416 INFO: JSON config files are the same 00:03:58.416 + echo 'INFO: JSON config files are the same' 00:03:58.416 + rm /tmp/62.kRO /tmp/spdk_tgt_config.json.zow 00:03:58.416 + exit 0 00:03:58.416 INFO: changing configuration and checking if this can be detected... 00:03:58.416 15:02:27 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:03:58.416 15:02:27 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
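The "is the configuration the same" check is a plain textual diff of two normalized JSON dumps: json_diff.sh appears to drive config_filter.py -method sort as a stdin-to-stdout filter so that key order cannot cause false mismatches. In outline, with placeholder /tmp names standing in for the mktemp files such as /tmp/62.kRO above:

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live.json
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
    # identical sorted dumps -> diff exits 0 and the test prints "JSON config files are the same"
    diff -u /tmp/live.json /tmp/saved.json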
00:03:58.416 15:02:27 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:58.416 15:02:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:58.676 15:02:27 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:58.676 15:02:27 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:03:58.676 15:02:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:58.676 + '[' 2 -ne 2 ']' 00:03:58.676 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:03:58.676 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:03:58.676 + rootdir=/home/vagrant/spdk_repo/spdk 00:03:58.676 +++ basename /dev/fd/62 00:03:58.676 ++ mktemp /tmp/62.XXX 00:03:58.935 + tmp_file_1=/tmp/62.D3m 00:03:58.935 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:58.935 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:58.935 + tmp_file_2=/tmp/spdk_tgt_config.json.KBY 00:03:58.935 + ret=0 00:03:58.935 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:59.194 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:59.194 + diff -u /tmp/62.D3m /tmp/spdk_tgt_config.json.KBY 00:03:59.194 + ret=1 00:03:59.194 + echo '=== Start of file: /tmp/62.D3m ===' 00:03:59.194 + cat /tmp/62.D3m 00:03:59.194 + echo '=== End of file: /tmp/62.D3m ===' 00:03:59.194 + echo '' 00:03:59.194 + echo '=== Start of file: /tmp/spdk_tgt_config.json.KBY ===' 00:03:59.194 + cat /tmp/spdk_tgt_config.json.KBY 00:03:59.194 + echo '=== End of file: /tmp/spdk_tgt_config.json.KBY ===' 00:03:59.194 + echo '' 00:03:59.194 + rm /tmp/62.D3m /tmp/spdk_tgt_config.json.KBY 00:03:59.194 + exit 1 00:03:59.194 INFO: configuration change detected. 00:03:59.194 15:02:28 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
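Change detection reuses the same comparison after mutating the live target: the sentinel bdev MallocBdevForConfigChangeCheck is deleted over RPC, so the freshly sorted dump no longer matches the saved file and the diff returns 1. Schematically, with the same placeholder file names as above:

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    # re-dump and re-compare: the outputs now differ, diff exits non-zero, ret=1
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live.json
    diff -u /tmp/live.json /tmp/saved.json || echo "INFO: configuration change detected."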
00:03:59.194 15:02:28 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:03:59.194 15:02:28 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:03:59.194 15:02:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:59.194 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:03:59.194 15:02:28 -- json_config/json_config.sh@360 -- # local ret=0 00:03:59.194 15:02:28 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:03:59.194 15:02:28 -- json_config/json_config.sh@370 -- # [[ -n 54270 ]] 00:03:59.194 15:02:28 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:03:59.194 15:02:28 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:03:59.194 15:02:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:59.194 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:03:59.194 15:02:28 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:03:59.194 15:02:28 -- json_config/json_config.sh@246 -- # uname -s 00:03:59.194 15:02:28 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:03:59.194 15:02:28 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:03:59.194 15:02:28 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:03:59.194 15:02:28 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:03:59.194 15:02:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:59.194 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:03:59.454 15:02:28 -- json_config/json_config.sh@376 -- # killprocess 54270 00:03:59.454 15:02:28 -- common/autotest_common.sh@936 -- # '[' -z 54270 ']' 00:03:59.454 15:02:28 -- common/autotest_common.sh@940 -- # kill -0 54270 00:03:59.454 15:02:28 -- common/autotest_common.sh@941 -- # uname 00:03:59.454 15:02:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:59.454 15:02:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54270 00:03:59.454 killing process with pid 54270 00:03:59.454 15:02:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:59.454 15:02:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:59.454 15:02:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54270' 00:03:59.454 15:02:28 -- common/autotest_common.sh@955 -- # kill 54270 00:03:59.454 15:02:28 -- common/autotest_common.sh@960 -- # wait 54270 00:03:59.454 15:02:28 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:59.454 15:02:28 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:03:59.454 15:02:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:59.454 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:03:59.714 INFO: Success 00:03:59.714 15:02:28 -- json_config/json_config.sh@381 -- # return 0 00:03:59.714 15:02:28 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:03:59.714 00:03:59.714 real 0m8.269s 00:03:59.714 user 0m12.054s 00:03:59.714 sys 0m1.420s 00:03:59.714 ************************************ 00:03:59.714 END TEST json_config 00:03:59.714 ************************************ 00:03:59.715 15:02:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:59.715 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:03:59.715 15:02:28 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:59.715 
15:02:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.715 15:02:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.715 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:03:59.715 ************************************ 00:03:59.715 START TEST json_config_extra_key 00:03:59.715 ************************************ 00:03:59.715 15:02:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:59.715 15:02:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:59.715 15:02:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:59.715 15:02:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:59.715 15:02:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:59.715 15:02:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:59.715 15:02:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:59.715 15:02:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:59.715 15:02:28 -- scripts/common.sh@335 -- # IFS=.-: 00:03:59.715 15:02:28 -- scripts/common.sh@335 -- # read -ra ver1 00:03:59.715 15:02:28 -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.715 15:02:28 -- scripts/common.sh@336 -- # read -ra ver2 00:03:59.715 15:02:28 -- scripts/common.sh@337 -- # local 'op=<' 00:03:59.715 15:02:28 -- scripts/common.sh@339 -- # ver1_l=2 00:03:59.715 15:02:28 -- scripts/common.sh@340 -- # ver2_l=1 00:03:59.715 15:02:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:59.715 15:02:28 -- scripts/common.sh@343 -- # case "$op" in 00:03:59.715 15:02:28 -- scripts/common.sh@344 -- # : 1 00:03:59.715 15:02:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:59.715 15:02:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:59.715 15:02:28 -- scripts/common.sh@364 -- # decimal 1 00:03:59.715 15:02:28 -- scripts/common.sh@352 -- # local d=1 00:03:59.715 15:02:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.715 15:02:28 -- scripts/common.sh@354 -- # echo 1 00:03:59.715 15:02:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:59.715 15:02:28 -- scripts/common.sh@365 -- # decimal 2 00:03:59.715 15:02:28 -- scripts/common.sh@352 -- # local d=2 00:03:59.715 15:02:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.715 15:02:28 -- scripts/common.sh@354 -- # echo 2 00:03:59.715 15:02:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:59.715 15:02:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:59.715 15:02:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:59.715 15:02:28 -- scripts/common.sh@367 -- # return 0 00:03:59.715 15:02:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.715 15:02:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:59.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.715 --rc genhtml_branch_coverage=1 00:03:59.715 --rc genhtml_function_coverage=1 00:03:59.715 --rc genhtml_legend=1 00:03:59.715 --rc geninfo_all_blocks=1 00:03:59.715 --rc geninfo_unexecuted_blocks=1 00:03:59.715 00:03:59.715 ' 00:03:59.715 15:02:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:59.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.715 --rc genhtml_branch_coverage=1 00:03:59.715 --rc genhtml_function_coverage=1 00:03:59.715 --rc genhtml_legend=1 00:03:59.715 --rc geninfo_all_blocks=1 00:03:59.715 --rc geninfo_unexecuted_blocks=1 00:03:59.715 00:03:59.715 ' 
00:03:59.715 15:02:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:59.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.715 --rc genhtml_branch_coverage=1 00:03:59.715 --rc genhtml_function_coverage=1 00:03:59.715 --rc genhtml_legend=1 00:03:59.715 --rc geninfo_all_blocks=1 00:03:59.715 --rc geninfo_unexecuted_blocks=1 00:03:59.715 00:03:59.715 ' 00:03:59.715 15:02:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:59.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.715 --rc genhtml_branch_coverage=1 00:03:59.715 --rc genhtml_function_coverage=1 00:03:59.715 --rc genhtml_legend=1 00:03:59.715 --rc geninfo_all_blocks=1 00:03:59.715 --rc geninfo_unexecuted_blocks=1 00:03:59.715 00:03:59.715 ' 00:03:59.715 15:02:28 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:59.715 15:02:28 -- nvmf/common.sh@7 -- # uname -s 00:03:59.715 15:02:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:59.715 15:02:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:59.715 15:02:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:59.715 15:02:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:59.715 15:02:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:59.715 15:02:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:59.715 15:02:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:59.715 15:02:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:59.715 15:02:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:59.715 15:02:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:59.715 15:02:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:03:59.715 15:02:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:03:59.715 15:02:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:59.715 15:02:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:59.715 15:02:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:59.715 15:02:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:59.715 15:02:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:59.715 15:02:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:59.715 15:02:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:59.715 15:02:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.715 15:02:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.715 15:02:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.715 15:02:28 -- paths/export.sh@5 -- # export PATH 00:03:59.715 15:02:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.715 15:02:28 -- nvmf/common.sh@46 -- # : 0 00:03:59.715 15:02:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:59.715 15:02:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:59.715 15:02:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:59.715 15:02:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:59.715 15:02:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:59.715 15:02:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:59.715 15:02:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:59.715 15:02:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:59.715 15:02:28 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:03:59.715 15:02:28 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:03:59.715 15:02:28 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:59.715 15:02:28 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:03:59.715 15:02:28 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:59.715 15:02:28 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:03:59.715 15:02:28 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:03:59.715 15:02:28 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:03:59.715 15:02:28 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:59.715 15:02:28 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:03:59.715 INFO: launching applications... 00:03:59.715 Waiting for target to run... 00:03:59.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:59.716 15:02:28 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:59.716 15:02:28 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:03:59.716 15:02:28 -- json_config/json_config_extra_key.sh@25 -- # shift 00:03:59.716 15:02:28 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:03:59.716 15:02:28 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:03:59.716 15:02:28 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=54423 00:03:59.716 15:02:28 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
00:03:59.716 15:02:28 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 54423 /var/tmp/spdk_tgt.sock 00:03:59.716 15:02:28 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:59.716 15:02:28 -- common/autotest_common.sh@829 -- # '[' -z 54423 ']' 00:03:59.716 15:02:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:59.716 15:02:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:59.716 15:02:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:59.716 15:02:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:59.716 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:03:59.975 [2024-11-06 15:02:29.035854] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:03:59.975 [2024-11-06 15:02:29.036167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54423 ] 00:04:00.234 [2024-11-06 15:02:29.328780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.234 [2024-11-06 15:02:29.365365] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:00.234 [2024-11-06 15:02:29.365762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.803 15:02:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:00.803 15:02:30 -- common/autotest_common.sh@862 -- # return 0 00:04:00.803 15:02:30 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:00.803 00:04:00.803 15:02:30 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:00.803 INFO: shutting down applications... 
00:04:00.803 15:02:30 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:00.803 15:02:30 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:00.803 15:02:30 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:00.803 15:02:30 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 54423 ]] 00:04:00.803 15:02:30 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 54423 00:04:00.803 15:02:30 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:00.803 15:02:30 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:00.803 15:02:30 -- json_config/json_config_extra_key.sh@50 -- # kill -0 54423 00:04:00.803 15:02:30 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:01.372 15:02:30 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:01.372 15:02:30 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:01.372 15:02:30 -- json_config/json_config_extra_key.sh@50 -- # kill -0 54423 00:04:01.372 SPDK target shutdown done 00:04:01.372 Success 00:04:01.372 15:02:30 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:01.372 15:02:30 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:01.372 15:02:30 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:01.372 15:02:30 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:01.372 15:02:30 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:01.372 00:04:01.372 real 0m1.744s 00:04:01.372 user 0m1.599s 00:04:01.372 sys 0m0.325s 00:04:01.372 ************************************ 00:04:01.372 END TEST json_config_extra_key 00:04:01.372 ************************************ 00:04:01.372 15:02:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:01.372 15:02:30 -- common/autotest_common.sh@10 -- # set +x 00:04:01.372 15:02:30 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:01.372 15:02:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:01.372 15:02:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:01.372 15:02:30 -- common/autotest_common.sh@10 -- # set +x 00:04:01.372 ************************************ 00:04:01.372 START TEST alias_rpc 00:04:01.372 ************************************ 00:04:01.372 15:02:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:01.372 * Looking for test storage... 
00:04:01.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:01.632 15:02:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:01.632 15:02:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:01.632 15:02:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:01.632 15:02:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:01.632 15:02:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:01.632 15:02:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:01.632 15:02:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:01.632 15:02:30 -- scripts/common.sh@335 -- # IFS=.-: 00:04:01.632 15:02:30 -- scripts/common.sh@335 -- # read -ra ver1 00:04:01.632 15:02:30 -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.632 15:02:30 -- scripts/common.sh@336 -- # read -ra ver2 00:04:01.632 15:02:30 -- scripts/common.sh@337 -- # local 'op=<' 00:04:01.632 15:02:30 -- scripts/common.sh@339 -- # ver1_l=2 00:04:01.632 15:02:30 -- scripts/common.sh@340 -- # ver2_l=1 00:04:01.632 15:02:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:01.632 15:02:30 -- scripts/common.sh@343 -- # case "$op" in 00:04:01.632 15:02:30 -- scripts/common.sh@344 -- # : 1 00:04:01.632 15:02:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:01.632 15:02:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:01.632 15:02:30 -- scripts/common.sh@364 -- # decimal 1 00:04:01.632 15:02:30 -- scripts/common.sh@352 -- # local d=1 00:04:01.632 15:02:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.632 15:02:30 -- scripts/common.sh@354 -- # echo 1 00:04:01.632 15:02:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:01.632 15:02:30 -- scripts/common.sh@365 -- # decimal 2 00:04:01.632 15:02:30 -- scripts/common.sh@352 -- # local d=2 00:04:01.632 15:02:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.632 15:02:30 -- scripts/common.sh@354 -- # echo 2 00:04:01.632 15:02:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:01.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:01.632 15:02:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:01.632 15:02:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:01.632 15:02:30 -- scripts/common.sh@367 -- # return 0 00:04:01.632 15:02:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.632 15:02:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:01.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.632 --rc genhtml_branch_coverage=1 00:04:01.633 --rc genhtml_function_coverage=1 00:04:01.633 --rc genhtml_legend=1 00:04:01.633 --rc geninfo_all_blocks=1 00:04:01.633 --rc geninfo_unexecuted_blocks=1 00:04:01.633 00:04:01.633 ' 00:04:01.633 15:02:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:01.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.633 --rc genhtml_branch_coverage=1 00:04:01.633 --rc genhtml_function_coverage=1 00:04:01.633 --rc genhtml_legend=1 00:04:01.633 --rc geninfo_all_blocks=1 00:04:01.633 --rc geninfo_unexecuted_blocks=1 00:04:01.633 00:04:01.633 ' 00:04:01.633 15:02:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:01.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.633 --rc genhtml_branch_coverage=1 00:04:01.633 --rc genhtml_function_coverage=1 00:04:01.633 --rc genhtml_legend=1 00:04:01.633 --rc geninfo_all_blocks=1 00:04:01.633 --rc geninfo_unexecuted_blocks=1 00:04:01.633 00:04:01.633 ' 00:04:01.633 15:02:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:01.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.633 --rc genhtml_branch_coverage=1 00:04:01.633 --rc genhtml_function_coverage=1 00:04:01.633 --rc genhtml_legend=1 00:04:01.633 --rc geninfo_all_blocks=1 00:04:01.633 --rc geninfo_unexecuted_blocks=1 00:04:01.633 00:04:01.633 ' 00:04:01.633 15:02:30 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:01.633 15:02:30 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=54489 00:04:01.633 15:02:30 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 54489 00:04:01.633 15:02:30 -- common/autotest_common.sh@829 -- # '[' -z 54489 ']' 00:04:01.633 15:02:30 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:01.633 15:02:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.633 15:02:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:01.633 15:02:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.633 15:02:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:01.633 15:02:30 -- common/autotest_common.sh@10 -- # set +x 00:04:01.633 [2024-11-06 15:02:30.827290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:01.633 [2024-11-06 15:02:30.827559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54489 ] 00:04:01.892 [2024-11-06 15:02:30.960305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.892 [2024-11-06 15:02:31.008998] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:01.892 [2024-11-06 15:02:31.009414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.831 15:02:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:02.831 15:02:31 -- common/autotest_common.sh@862 -- # return 0 00:04:02.831 15:02:31 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:02.831 15:02:32 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 54489 00:04:02.831 15:02:32 -- common/autotest_common.sh@936 -- # '[' -z 54489 ']' 00:04:02.831 15:02:32 -- common/autotest_common.sh@940 -- # kill -0 54489 00:04:02.831 15:02:32 -- common/autotest_common.sh@941 -- # uname 00:04:02.831 15:02:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:02.831 15:02:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54489 00:04:03.090 killing process with pid 54489 00:04:03.090 15:02:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:03.091 15:02:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:03.091 15:02:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54489' 00:04:03.091 15:02:32 -- common/autotest_common.sh@955 -- # kill 54489 00:04:03.091 15:02:32 -- common/autotest_common.sh@960 -- # wait 54489 00:04:03.091 00:04:03.091 real 0m1.789s 00:04:03.091 user 0m2.146s 00:04:03.091 sys 0m0.332s 00:04:03.350 ************************************ 00:04:03.350 END TEST alias_rpc 00:04:03.350 ************************************ 00:04:03.350 15:02:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:03.350 15:02:32 -- common/autotest_common.sh@10 -- # set +x 00:04:03.350 15:02:32 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:04:03.350 15:02:32 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:03.350 15:02:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:03.350 15:02:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:03.350 15:02:32 -- common/autotest_common.sh@10 -- # set +x 00:04:03.350 ************************************ 00:04:03.350 START TEST spdkcli_tcp 00:04:03.350 ************************************ 00:04:03.350 15:02:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:03.350 * Looking for test storage... 
00:04:03.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:03.350 15:02:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:03.350 15:02:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:03.350 15:02:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:03.350 15:02:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:03.350 15:02:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:03.350 15:02:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:03.350 15:02:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:03.350 15:02:32 -- scripts/common.sh@335 -- # IFS=.-: 00:04:03.350 15:02:32 -- scripts/common.sh@335 -- # read -ra ver1 00:04:03.350 15:02:32 -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.350 15:02:32 -- scripts/common.sh@336 -- # read -ra ver2 00:04:03.350 15:02:32 -- scripts/common.sh@337 -- # local 'op=<' 00:04:03.350 15:02:32 -- scripts/common.sh@339 -- # ver1_l=2 00:04:03.350 15:02:32 -- scripts/common.sh@340 -- # ver2_l=1 00:04:03.350 15:02:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:03.350 15:02:32 -- scripts/common.sh@343 -- # case "$op" in 00:04:03.350 15:02:32 -- scripts/common.sh@344 -- # : 1 00:04:03.350 15:02:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:03.350 15:02:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:03.350 15:02:32 -- scripts/common.sh@364 -- # decimal 1 00:04:03.350 15:02:32 -- scripts/common.sh@352 -- # local d=1 00:04:03.350 15:02:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.350 15:02:32 -- scripts/common.sh@354 -- # echo 1 00:04:03.350 15:02:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:03.350 15:02:32 -- scripts/common.sh@365 -- # decimal 2 00:04:03.350 15:02:32 -- scripts/common.sh@352 -- # local d=2 00:04:03.350 15:02:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.350 15:02:32 -- scripts/common.sh@354 -- # echo 2 00:04:03.350 15:02:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:03.350 15:02:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:03.350 15:02:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:03.350 15:02:32 -- scripts/common.sh@367 -- # return 0 00:04:03.350 15:02:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.350 15:02:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:03.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.350 --rc genhtml_branch_coverage=1 00:04:03.350 --rc genhtml_function_coverage=1 00:04:03.350 --rc genhtml_legend=1 00:04:03.350 --rc geninfo_all_blocks=1 00:04:03.350 --rc geninfo_unexecuted_blocks=1 00:04:03.350 00:04:03.350 ' 00:04:03.350 15:02:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:03.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.350 --rc genhtml_branch_coverage=1 00:04:03.350 --rc genhtml_function_coverage=1 00:04:03.350 --rc genhtml_legend=1 00:04:03.350 --rc geninfo_all_blocks=1 00:04:03.350 --rc geninfo_unexecuted_blocks=1 00:04:03.350 00:04:03.350 ' 00:04:03.350 15:02:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:03.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.350 --rc genhtml_branch_coverage=1 00:04:03.350 --rc genhtml_function_coverage=1 00:04:03.350 --rc genhtml_legend=1 00:04:03.350 --rc geninfo_all_blocks=1 00:04:03.350 --rc geninfo_unexecuted_blocks=1 00:04:03.350 00:04:03.350 ' 00:04:03.350 15:02:32 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:03.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.350 --rc genhtml_branch_coverage=1 00:04:03.350 --rc genhtml_function_coverage=1 00:04:03.350 --rc genhtml_legend=1 00:04:03.350 --rc geninfo_all_blocks=1 00:04:03.350 --rc geninfo_unexecuted_blocks=1 00:04:03.350 00:04:03.350 ' 00:04:03.350 15:02:32 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:03.350 15:02:32 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:03.350 15:02:32 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:03.350 15:02:32 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:03.350 15:02:32 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:03.350 15:02:32 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:03.350 15:02:32 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:03.350 15:02:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:03.350 15:02:32 -- common/autotest_common.sh@10 -- # set +x 00:04:03.351 15:02:32 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=54572 00:04:03.351 15:02:32 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:03.351 15:02:32 -- spdkcli/tcp.sh@27 -- # waitforlisten 54572 00:04:03.351 15:02:32 -- common/autotest_common.sh@829 -- # '[' -z 54572 ']' 00:04:03.351 15:02:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.351 15:02:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:03.351 15:02:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.351 15:02:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:03.351 15:02:32 -- common/autotest_common.sh@10 -- # set +x 00:04:03.610 [2024-11-06 15:02:32.665105] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:03.610 [2024-11-06 15:02:32.665406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54572 ] 00:04:03.610 [2024-11-06 15:02:32.796874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:03.610 [2024-11-06 15:02:32.849288] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:03.610 [2024-11-06 15:02:32.849732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:03.610 [2024-11-06 15:02:32.849739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.550 15:02:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:04.550 15:02:33 -- common/autotest_common.sh@862 -- # return 0 00:04:04.550 15:02:33 -- spdkcli/tcp.sh@31 -- # socat_pid=54589 00:04:04.550 15:02:33 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:04.550 15:02:33 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:04.550 [ 00:04:04.550 "bdev_malloc_delete", 00:04:04.550 "bdev_malloc_create", 00:04:04.550 "bdev_null_resize", 00:04:04.550 "bdev_null_delete", 00:04:04.550 "bdev_null_create", 00:04:04.550 "bdev_nvme_cuse_unregister", 00:04:04.550 "bdev_nvme_cuse_register", 00:04:04.550 "bdev_opal_new_user", 00:04:04.550 "bdev_opal_set_lock_state", 00:04:04.550 "bdev_opal_delete", 00:04:04.550 "bdev_opal_get_info", 00:04:04.550 "bdev_opal_create", 00:04:04.550 "bdev_nvme_opal_revert", 00:04:04.550 "bdev_nvme_opal_init", 00:04:04.550 "bdev_nvme_send_cmd", 00:04:04.550 "bdev_nvme_get_path_iostat", 00:04:04.550 "bdev_nvme_get_mdns_discovery_info", 00:04:04.550 "bdev_nvme_stop_mdns_discovery", 00:04:04.550 "bdev_nvme_start_mdns_discovery", 00:04:04.550 "bdev_nvme_set_multipath_policy", 00:04:04.550 "bdev_nvme_set_preferred_path", 00:04:04.550 "bdev_nvme_get_io_paths", 00:04:04.550 "bdev_nvme_remove_error_injection", 00:04:04.550 "bdev_nvme_add_error_injection", 00:04:04.550 "bdev_nvme_get_discovery_info", 00:04:04.550 "bdev_nvme_stop_discovery", 00:04:04.550 "bdev_nvme_start_discovery", 00:04:04.550 "bdev_nvme_get_controller_health_info", 00:04:04.550 "bdev_nvme_disable_controller", 00:04:04.550 "bdev_nvme_enable_controller", 00:04:04.550 "bdev_nvme_reset_controller", 00:04:04.550 "bdev_nvme_get_transport_statistics", 00:04:04.550 "bdev_nvme_apply_firmware", 00:04:04.550 "bdev_nvme_detach_controller", 00:04:04.550 "bdev_nvme_get_controllers", 00:04:04.550 "bdev_nvme_attach_controller", 00:04:04.550 "bdev_nvme_set_hotplug", 00:04:04.550 "bdev_nvme_set_options", 00:04:04.550 "bdev_passthru_delete", 00:04:04.550 "bdev_passthru_create", 00:04:04.550 "bdev_lvol_grow_lvstore", 00:04:04.550 "bdev_lvol_get_lvols", 00:04:04.550 "bdev_lvol_get_lvstores", 00:04:04.550 "bdev_lvol_delete", 00:04:04.550 "bdev_lvol_set_read_only", 00:04:04.550 "bdev_lvol_resize", 00:04:04.550 "bdev_lvol_decouple_parent", 00:04:04.550 "bdev_lvol_inflate", 00:04:04.550 "bdev_lvol_rename", 00:04:04.550 "bdev_lvol_clone_bdev", 00:04:04.550 "bdev_lvol_clone", 00:04:04.550 "bdev_lvol_snapshot", 00:04:04.550 "bdev_lvol_create", 00:04:04.550 "bdev_lvol_delete_lvstore", 00:04:04.550 "bdev_lvol_rename_lvstore", 00:04:04.550 "bdev_lvol_create_lvstore", 00:04:04.550 "bdev_raid_set_options", 00:04:04.550 "bdev_raid_remove_base_bdev", 00:04:04.550 "bdev_raid_add_base_bdev", 
00:04:04.550 "bdev_raid_delete", 00:04:04.550 "bdev_raid_create", 00:04:04.550 "bdev_raid_get_bdevs", 00:04:04.550 "bdev_error_inject_error", 00:04:04.550 "bdev_error_delete", 00:04:04.550 "bdev_error_create", 00:04:04.550 "bdev_split_delete", 00:04:04.550 "bdev_split_create", 00:04:04.550 "bdev_delay_delete", 00:04:04.550 "bdev_delay_create", 00:04:04.550 "bdev_delay_update_latency", 00:04:04.550 "bdev_zone_block_delete", 00:04:04.550 "bdev_zone_block_create", 00:04:04.550 "blobfs_create", 00:04:04.550 "blobfs_detect", 00:04:04.550 "blobfs_set_cache_size", 00:04:04.550 "bdev_aio_delete", 00:04:04.550 "bdev_aio_rescan", 00:04:04.550 "bdev_aio_create", 00:04:04.550 "bdev_ftl_set_property", 00:04:04.550 "bdev_ftl_get_properties", 00:04:04.550 "bdev_ftl_get_stats", 00:04:04.550 "bdev_ftl_unmap", 00:04:04.550 "bdev_ftl_unload", 00:04:04.550 "bdev_ftl_delete", 00:04:04.550 "bdev_ftl_load", 00:04:04.550 "bdev_ftl_create", 00:04:04.550 "bdev_virtio_attach_controller", 00:04:04.550 "bdev_virtio_scsi_get_devices", 00:04:04.550 "bdev_virtio_detach_controller", 00:04:04.550 "bdev_virtio_blk_set_hotplug", 00:04:04.550 "bdev_iscsi_delete", 00:04:04.550 "bdev_iscsi_create", 00:04:04.550 "bdev_iscsi_set_options", 00:04:04.550 "bdev_uring_delete", 00:04:04.550 "bdev_uring_create", 00:04:04.550 "accel_error_inject_error", 00:04:04.550 "ioat_scan_accel_module", 00:04:04.550 "dsa_scan_accel_module", 00:04:04.550 "iaa_scan_accel_module", 00:04:04.550 "vfu_virtio_create_scsi_endpoint", 00:04:04.550 "vfu_virtio_scsi_remove_target", 00:04:04.550 "vfu_virtio_scsi_add_target", 00:04:04.550 "vfu_virtio_create_blk_endpoint", 00:04:04.550 "vfu_virtio_delete_endpoint", 00:04:04.550 "iscsi_set_options", 00:04:04.550 "iscsi_get_auth_groups", 00:04:04.550 "iscsi_auth_group_remove_secret", 00:04:04.550 "iscsi_auth_group_add_secret", 00:04:04.550 "iscsi_delete_auth_group", 00:04:04.550 "iscsi_create_auth_group", 00:04:04.550 "iscsi_set_discovery_auth", 00:04:04.550 "iscsi_get_options", 00:04:04.550 "iscsi_target_node_request_logout", 00:04:04.550 "iscsi_target_node_set_redirect", 00:04:04.550 "iscsi_target_node_set_auth", 00:04:04.550 "iscsi_target_node_add_lun", 00:04:04.550 "iscsi_get_connections", 00:04:04.550 "iscsi_portal_group_set_auth", 00:04:04.550 "iscsi_start_portal_group", 00:04:04.550 "iscsi_delete_portal_group", 00:04:04.550 "iscsi_create_portal_group", 00:04:04.550 "iscsi_get_portal_groups", 00:04:04.550 "iscsi_delete_target_node", 00:04:04.550 "iscsi_target_node_remove_pg_ig_maps", 00:04:04.550 "iscsi_target_node_add_pg_ig_maps", 00:04:04.550 "iscsi_create_target_node", 00:04:04.550 "iscsi_get_target_nodes", 00:04:04.550 "iscsi_delete_initiator_group", 00:04:04.550 "iscsi_initiator_group_remove_initiators", 00:04:04.550 "iscsi_initiator_group_add_initiators", 00:04:04.550 "iscsi_create_initiator_group", 00:04:04.550 "iscsi_get_initiator_groups", 00:04:04.550 "nvmf_set_crdt", 00:04:04.550 "nvmf_set_config", 00:04:04.550 "nvmf_set_max_subsystems", 00:04:04.550 "nvmf_subsystem_get_listeners", 00:04:04.550 "nvmf_subsystem_get_qpairs", 00:04:04.550 "nvmf_subsystem_get_controllers", 00:04:04.550 "nvmf_get_stats", 00:04:04.550 "nvmf_get_transports", 00:04:04.550 "nvmf_create_transport", 00:04:04.550 "nvmf_get_targets", 00:04:04.550 "nvmf_delete_target", 00:04:04.551 "nvmf_create_target", 00:04:04.551 "nvmf_subsystem_allow_any_host", 00:04:04.551 "nvmf_subsystem_remove_host", 00:04:04.551 "nvmf_subsystem_add_host", 00:04:04.551 "nvmf_subsystem_remove_ns", 00:04:04.551 "nvmf_subsystem_add_ns", 00:04:04.551 
"nvmf_subsystem_listener_set_ana_state", 00:04:04.551 "nvmf_discovery_get_referrals", 00:04:04.551 "nvmf_discovery_remove_referral", 00:04:04.551 "nvmf_discovery_add_referral", 00:04:04.551 "nvmf_subsystem_remove_listener", 00:04:04.551 "nvmf_subsystem_add_listener", 00:04:04.551 "nvmf_delete_subsystem", 00:04:04.551 "nvmf_create_subsystem", 00:04:04.551 "nvmf_get_subsystems", 00:04:04.551 "env_dpdk_get_mem_stats", 00:04:04.551 "nbd_get_disks", 00:04:04.551 "nbd_stop_disk", 00:04:04.551 "nbd_start_disk", 00:04:04.551 "ublk_recover_disk", 00:04:04.551 "ublk_get_disks", 00:04:04.551 "ublk_stop_disk", 00:04:04.551 "ublk_start_disk", 00:04:04.551 "ublk_destroy_target", 00:04:04.551 "ublk_create_target", 00:04:04.551 "virtio_blk_create_transport", 00:04:04.551 "virtio_blk_get_transports", 00:04:04.551 "vhost_controller_set_coalescing", 00:04:04.551 "vhost_get_controllers", 00:04:04.551 "vhost_delete_controller", 00:04:04.551 "vhost_create_blk_controller", 00:04:04.551 "vhost_scsi_controller_remove_target", 00:04:04.551 "vhost_scsi_controller_add_target", 00:04:04.551 "vhost_start_scsi_controller", 00:04:04.551 "vhost_create_scsi_controller", 00:04:04.551 "thread_set_cpumask", 00:04:04.551 "framework_get_scheduler", 00:04:04.551 "framework_set_scheduler", 00:04:04.551 "framework_get_reactors", 00:04:04.551 "thread_get_io_channels", 00:04:04.551 "thread_get_pollers", 00:04:04.551 "thread_get_stats", 00:04:04.551 "framework_monitor_context_switch", 00:04:04.551 "spdk_kill_instance", 00:04:04.551 "log_enable_timestamps", 00:04:04.551 "log_get_flags", 00:04:04.551 "log_clear_flag", 00:04:04.551 "log_set_flag", 00:04:04.551 "log_get_level", 00:04:04.551 "log_set_level", 00:04:04.551 "log_get_print_level", 00:04:04.551 "log_set_print_level", 00:04:04.551 "framework_enable_cpumask_locks", 00:04:04.551 "framework_disable_cpumask_locks", 00:04:04.551 "framework_wait_init", 00:04:04.551 "framework_start_init", 00:04:04.551 "scsi_get_devices", 00:04:04.551 "bdev_get_histogram", 00:04:04.551 "bdev_enable_histogram", 00:04:04.551 "bdev_set_qos_limit", 00:04:04.551 "bdev_set_qd_sampling_period", 00:04:04.551 "bdev_get_bdevs", 00:04:04.551 "bdev_reset_iostat", 00:04:04.551 "bdev_get_iostat", 00:04:04.551 "bdev_examine", 00:04:04.551 "bdev_wait_for_examine", 00:04:04.551 "bdev_set_options", 00:04:04.551 "notify_get_notifications", 00:04:04.551 "notify_get_types", 00:04:04.551 "accel_get_stats", 00:04:04.551 "accel_set_options", 00:04:04.551 "accel_set_driver", 00:04:04.551 "accel_crypto_key_destroy", 00:04:04.551 "accel_crypto_keys_get", 00:04:04.551 "accel_crypto_key_create", 00:04:04.551 "accel_assign_opc", 00:04:04.551 "accel_get_module_info", 00:04:04.551 "accel_get_opc_assignments", 00:04:04.551 "vmd_rescan", 00:04:04.551 "vmd_remove_device", 00:04:04.551 "vmd_enable", 00:04:04.551 "sock_set_default_impl", 00:04:04.551 "sock_impl_set_options", 00:04:04.551 "sock_impl_get_options", 00:04:04.551 "iobuf_get_stats", 00:04:04.551 "iobuf_set_options", 00:04:04.551 "framework_get_pci_devices", 00:04:04.551 "framework_get_config", 00:04:04.551 "framework_get_subsystems", 00:04:04.551 "vfu_tgt_set_base_path", 00:04:04.551 "trace_get_info", 00:04:04.551 "trace_get_tpoint_group_mask", 00:04:04.551 "trace_disable_tpoint_group", 00:04:04.551 "trace_enable_tpoint_group", 00:04:04.551 "trace_clear_tpoint_mask", 00:04:04.551 "trace_set_tpoint_mask", 00:04:04.551 "spdk_get_version", 00:04:04.551 "rpc_get_methods" 00:04:04.551 ] 00:04:04.551 15:02:33 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:04.551 
15:02:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:04.551 15:02:33 -- common/autotest_common.sh@10 -- # set +x 00:04:04.811 15:02:33 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:04.811 15:02:33 -- spdkcli/tcp.sh@38 -- # killprocess 54572 00:04:04.811 15:02:33 -- common/autotest_common.sh@936 -- # '[' -z 54572 ']' 00:04:04.811 15:02:33 -- common/autotest_common.sh@940 -- # kill -0 54572 00:04:04.811 15:02:33 -- common/autotest_common.sh@941 -- # uname 00:04:04.811 15:02:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:04.811 15:02:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54572 00:04:04.811 killing process with pid 54572 00:04:04.811 15:02:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:04.811 15:02:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:04.811 15:02:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54572' 00:04:04.811 15:02:33 -- common/autotest_common.sh@955 -- # kill 54572 00:04:04.811 15:02:33 -- common/autotest_common.sh@960 -- # wait 54572 00:04:05.070 ************************************ 00:04:05.070 END TEST spdkcli_tcp 00:04:05.070 ************************************ 00:04:05.070 00:04:05.070 real 0m1.731s 00:04:05.070 user 0m3.215s 00:04:05.070 sys 0m0.357s 00:04:05.070 15:02:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:05.070 15:02:34 -- common/autotest_common.sh@10 -- # set +x 00:04:05.070 15:02:34 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:05.070 15:02:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:05.070 15:02:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:05.070 15:02:34 -- common/autotest_common.sh@10 -- # set +x 00:04:05.070 ************************************ 00:04:05.070 START TEST dpdk_mem_utility 00:04:05.070 ************************************ 00:04:05.070 15:02:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:05.070 * Looking for test storage... 
00:04:05.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:05.070 15:02:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:05.070 15:02:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:05.070 15:02:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:05.330 15:02:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:05.330 15:02:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:05.330 15:02:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:05.330 15:02:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:05.330 15:02:34 -- scripts/common.sh@335 -- # IFS=.-: 00:04:05.330 15:02:34 -- scripts/common.sh@335 -- # read -ra ver1 00:04:05.330 15:02:34 -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.330 15:02:34 -- scripts/common.sh@336 -- # read -ra ver2 00:04:05.330 15:02:34 -- scripts/common.sh@337 -- # local 'op=<' 00:04:05.330 15:02:34 -- scripts/common.sh@339 -- # ver1_l=2 00:04:05.330 15:02:34 -- scripts/common.sh@340 -- # ver2_l=1 00:04:05.330 15:02:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:05.330 15:02:34 -- scripts/common.sh@343 -- # case "$op" in 00:04:05.330 15:02:34 -- scripts/common.sh@344 -- # : 1 00:04:05.330 15:02:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:05.330 15:02:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:05.330 15:02:34 -- scripts/common.sh@364 -- # decimal 1 00:04:05.330 15:02:34 -- scripts/common.sh@352 -- # local d=1 00:04:05.330 15:02:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.330 15:02:34 -- scripts/common.sh@354 -- # echo 1 00:04:05.330 15:02:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:05.330 15:02:34 -- scripts/common.sh@365 -- # decimal 2 00:04:05.330 15:02:34 -- scripts/common.sh@352 -- # local d=2 00:04:05.330 15:02:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.331 15:02:34 -- scripts/common.sh@354 -- # echo 2 00:04:05.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:05.331 15:02:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:05.331 15:02:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:05.331 15:02:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:05.331 15:02:34 -- scripts/common.sh@367 -- # return 0 00:04:05.331 15:02:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.331 15:02:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:05.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.331 --rc genhtml_branch_coverage=1 00:04:05.331 --rc genhtml_function_coverage=1 00:04:05.331 --rc genhtml_legend=1 00:04:05.331 --rc geninfo_all_blocks=1 00:04:05.331 --rc geninfo_unexecuted_blocks=1 00:04:05.331 00:04:05.331 ' 00:04:05.331 15:02:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:05.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.331 --rc genhtml_branch_coverage=1 00:04:05.331 --rc genhtml_function_coverage=1 00:04:05.331 --rc genhtml_legend=1 00:04:05.331 --rc geninfo_all_blocks=1 00:04:05.331 --rc geninfo_unexecuted_blocks=1 00:04:05.331 00:04:05.331 ' 00:04:05.331 15:02:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:05.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.331 --rc genhtml_branch_coverage=1 00:04:05.331 --rc genhtml_function_coverage=1 00:04:05.331 --rc genhtml_legend=1 00:04:05.331 --rc geninfo_all_blocks=1 00:04:05.331 --rc geninfo_unexecuted_blocks=1 00:04:05.331 00:04:05.331 ' 00:04:05.331 15:02:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:05.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.331 --rc genhtml_branch_coverage=1 00:04:05.331 --rc genhtml_function_coverage=1 00:04:05.331 --rc genhtml_legend=1 00:04:05.331 --rc geninfo_all_blocks=1 00:04:05.331 --rc geninfo_unexecuted_blocks=1 00:04:05.331 00:04:05.331 ' 00:04:05.331 15:02:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:05.331 15:02:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=54670 00:04:05.331 15:02:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 54670 00:04:05.331 15:02:34 -- common/autotest_common.sh@829 -- # '[' -z 54670 ']' 00:04:05.331 15:02:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.331 15:02:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:05.331 15:02:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:05.331 15:02:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.331 15:02:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:05.331 15:02:34 -- common/autotest_common.sh@10 -- # set +x 00:04:05.331 [2024-11-06 15:02:34.431621] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:05.331 [2024-11-06 15:02:34.431741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54670 ] 00:04:05.331 [2024-11-06 15:02:34.568658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.590 [2024-11-06 15:02:34.618723] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:05.590 [2024-11-06 15:02:34.618884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.159 15:02:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:06.159 15:02:35 -- common/autotest_common.sh@862 -- # return 0 00:04:06.159 15:02:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:06.159 15:02:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:06.159 15:02:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.159 15:02:35 -- common/autotest_common.sh@10 -- # set +x 00:04:06.421 { 00:04:06.421 "filename": "/tmp/spdk_mem_dump.txt" 00:04:06.421 } 00:04:06.421 15:02:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.421 15:02:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:06.421 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:06.421 1 heaps totaling size 814.000000 MiB 00:04:06.421 size: 814.000000 MiB heap id: 0 00:04:06.421 end heaps---------- 00:04:06.421 8 mempools totaling size 598.116089 MiB 00:04:06.421 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:06.421 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:06.421 size: 84.521057 MiB name: bdev_io_54670 00:04:06.421 size: 51.011292 MiB name: evtpool_54670 00:04:06.421 size: 50.003479 MiB name: msgpool_54670 00:04:06.421 size: 21.763794 MiB name: PDU_Pool 00:04:06.421 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:06.421 size: 0.026123 MiB name: Session_Pool 00:04:06.421 end mempools------- 00:04:06.421 6 memzones totaling size 4.142822 MiB 00:04:06.421 size: 1.000366 MiB name: RG_ring_0_54670 00:04:06.421 size: 1.000366 MiB name: RG_ring_1_54670 00:04:06.421 size: 1.000366 MiB name: RG_ring_4_54670 00:04:06.421 size: 1.000366 MiB name: RG_ring_5_54670 00:04:06.421 size: 0.125366 MiB name: RG_ring_2_54670 00:04:06.421 size: 0.015991 MiB name: RG_ring_3_54670 00:04:06.421 end memzones------- 00:04:06.421 15:02:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:06.421 heap id: 0 total size: 814.000000 MiB number of busy elements: 295 number of free elements: 15 00:04:06.421 list of free elements. 
size: 12.472839 MiB 00:04:06.421 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:06.421 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:06.421 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:06.421 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:06.421 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:06.421 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:06.421 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:06.421 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:06.421 element at address: 0x200000200000 with size: 0.832825 MiB 00:04:06.421 element at address: 0x20001aa00000 with size: 0.570435 MiB 00:04:06.421 element at address: 0x20000b200000 with size: 0.488892 MiB 00:04:06.421 element at address: 0x200000800000 with size: 0.486328 MiB 00:04:06.421 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:06.421 element at address: 0x200027e00000 with size: 0.395752 MiB 00:04:06.421 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:06.421 list of standard malloc elements. size: 199.264587 MiB 00:04:06.421 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:06.421 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:06.421 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:06.421 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:06.421 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:06.421 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:06.421 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:06.421 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:06.421 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:06.421 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d6480 with size: 0.000183 MiB 
00:04:06.421 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:06.421 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:06.421 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:06.421 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:06.421 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:06.421 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:06.421 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:06.421 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:06.421 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:06.421 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:06.421 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:06.421 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:06.421 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:04:06.421 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:06.421 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:06.421 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:06.421 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:06.421 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:06.421 element at address: 0x200003a59540 with size: 0.000183 MiB 00:04:06.421 element at 
address: 0x200003a59600 with size: 0.000183 MiB 00:04:06.421 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:06.421 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:06.421 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:06.421 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20000b27d700 
with size: 0.000183 MiB 00:04:06.422 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:06.422 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:06.422 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:06.422 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa93d00 with size: 0.000183 MiB 
00:04:06.422 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:06.422 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e65500 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:06.422 element at 
address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:06.422 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6f3c0 
with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:06.423 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:06.423 list of memzone associated elements. size: 602.262573 MiB 00:04:06.423 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:06.423 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:06.423 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:06.423 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:06.423 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:06.423 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_54670_0 00:04:06.423 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:06.423 associated memzone info: size: 48.002930 MiB name: MP_evtpool_54670_0 00:04:06.423 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:06.423 associated memzone info: size: 48.002930 MiB name: MP_msgpool_54670_0 00:04:06.423 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:06.423 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:06.423 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:06.423 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:06.423 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:06.423 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_54670 00:04:06.423 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:06.423 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_54670 00:04:06.423 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:06.423 associated memzone info: size: 1.007996 MiB name: MP_evtpool_54670 00:04:06.423 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:06.423 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:06.423 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:06.423 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:06.423 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:06.423 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:06.423 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:06.423 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:06.423 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:06.423 associated memzone info: size: 1.000366 MiB name: RG_ring_0_54670 00:04:06.423 element at address: 
0x200003affc00 with size: 1.000488 MiB 00:04:06.423 associated memzone info: size: 1.000366 MiB name: RG_ring_1_54670 00:04:06.423 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:06.423 associated memzone info: size: 1.000366 MiB name: RG_ring_4_54670 00:04:06.423 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:06.423 associated memzone info: size: 1.000366 MiB name: RG_ring_5_54670 00:04:06.423 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:06.423 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_54670 00:04:06.423 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:06.423 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:06.423 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:06.423 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:06.423 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:06.423 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:06.423 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:06.423 associated memzone info: size: 0.125366 MiB name: RG_ring_2_54670 00:04:06.423 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:06.423 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:06.423 element at address: 0x200027e65680 with size: 0.023743 MiB 00:04:06.423 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:06.423 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:06.423 associated memzone info: size: 0.015991 MiB name: RG_ring_3_54670 00:04:06.423 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:04:06.423 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:06.423 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:06.423 associated memzone info: size: 0.000183 MiB name: MP_msgpool_54670 00:04:06.423 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:06.423 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_54670 00:04:06.423 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:04:06.423 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:06.423 15:02:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:06.423 15:02:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 54670 00:04:06.423 15:02:35 -- common/autotest_common.sh@936 -- # '[' -z 54670 ']' 00:04:06.423 15:02:35 -- common/autotest_common.sh@940 -- # kill -0 54670 00:04:06.423 15:02:35 -- common/autotest_common.sh@941 -- # uname 00:04:06.423 15:02:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:06.423 15:02:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54670 00:04:06.423 killing process with pid 54670 00:04:06.423 15:02:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:06.423 15:02:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:06.423 15:02:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54670' 00:04:06.423 15:02:35 -- common/autotest_common.sh@955 -- # kill 54670 00:04:06.423 15:02:35 -- common/autotest_common.sh@960 -- # wait 54670 00:04:06.683 ************************************ 00:04:06.683 END TEST dpdk_mem_utility 00:04:06.683 ************************************ 00:04:06.683 00:04:06.683 real 0m1.664s 00:04:06.683 user 0m1.957s 00:04:06.683 sys 0m0.318s 00:04:06.683 
15:02:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:06.683 15:02:35 -- common/autotest_common.sh@10 -- # set +x 00:04:06.683 15:02:35 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:06.683 15:02:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:06.683 15:02:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:06.683 15:02:35 -- common/autotest_common.sh@10 -- # set +x 00:04:06.683 ************************************ 00:04:06.683 START TEST event 00:04:06.683 ************************************ 00:04:06.683 15:02:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:06.942 * Looking for test storage... 00:04:06.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:06.942 15:02:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:06.942 15:02:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:06.942 15:02:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:06.942 15:02:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:06.942 15:02:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:06.942 15:02:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:06.942 15:02:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:06.942 15:02:36 -- scripts/common.sh@335 -- # IFS=.-: 00:04:06.942 15:02:36 -- scripts/common.sh@335 -- # read -ra ver1 00:04:06.942 15:02:36 -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.942 15:02:36 -- scripts/common.sh@336 -- # read -ra ver2 00:04:06.942 15:02:36 -- scripts/common.sh@337 -- # local 'op=<' 00:04:06.942 15:02:36 -- scripts/common.sh@339 -- # ver1_l=2 00:04:06.942 15:02:36 -- scripts/common.sh@340 -- # ver2_l=1 00:04:06.942 15:02:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:06.942 15:02:36 -- scripts/common.sh@343 -- # case "$op" in 00:04:06.942 15:02:36 -- scripts/common.sh@344 -- # : 1 00:04:06.942 15:02:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:06.942 15:02:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.942 15:02:36 -- scripts/common.sh@364 -- # decimal 1 00:04:06.942 15:02:36 -- scripts/common.sh@352 -- # local d=1 00:04:06.942 15:02:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.942 15:02:36 -- scripts/common.sh@354 -- # echo 1 00:04:06.942 15:02:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:06.942 15:02:36 -- scripts/common.sh@365 -- # decimal 2 00:04:06.942 15:02:36 -- scripts/common.sh@352 -- # local d=2 00:04:06.942 15:02:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.942 15:02:36 -- scripts/common.sh@354 -- # echo 2 00:04:06.942 15:02:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:06.942 15:02:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:06.942 15:02:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:06.942 15:02:36 -- scripts/common.sh@367 -- # return 0 00:04:06.942 15:02:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.942 15:02:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:06.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.942 --rc genhtml_branch_coverage=1 00:04:06.942 --rc genhtml_function_coverage=1 00:04:06.942 --rc genhtml_legend=1 00:04:06.942 --rc geninfo_all_blocks=1 00:04:06.942 --rc geninfo_unexecuted_blocks=1 00:04:06.942 00:04:06.942 ' 00:04:06.942 15:02:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:06.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.942 --rc genhtml_branch_coverage=1 00:04:06.942 --rc genhtml_function_coverage=1 00:04:06.942 --rc genhtml_legend=1 00:04:06.942 --rc geninfo_all_blocks=1 00:04:06.942 --rc geninfo_unexecuted_blocks=1 00:04:06.942 00:04:06.942 ' 00:04:06.942 15:02:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:06.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.942 --rc genhtml_branch_coverage=1 00:04:06.942 --rc genhtml_function_coverage=1 00:04:06.942 --rc genhtml_legend=1 00:04:06.942 --rc geninfo_all_blocks=1 00:04:06.942 --rc geninfo_unexecuted_blocks=1 00:04:06.942 00:04:06.942 ' 00:04:06.942 15:02:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:06.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.942 --rc genhtml_branch_coverage=1 00:04:06.942 --rc genhtml_function_coverage=1 00:04:06.942 --rc genhtml_legend=1 00:04:06.942 --rc geninfo_all_blocks=1 00:04:06.942 --rc geninfo_unexecuted_blocks=1 00:04:06.942 00:04:06.942 ' 00:04:06.942 15:02:36 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:06.942 15:02:36 -- bdev/nbd_common.sh@6 -- # set -e 00:04:06.942 15:02:36 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:06.942 15:02:36 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:06.942 15:02:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:06.942 15:02:36 -- common/autotest_common.sh@10 -- # set +x 00:04:06.942 ************************************ 00:04:06.942 START TEST event_perf 00:04:06.942 ************************************ 00:04:06.942 15:02:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:06.942 Running I/O for 1 seconds...[2024-11-06 15:02:36.104081] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:06.942 [2024-11-06 15:02:36.104332] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54754 ] 00:04:07.202 [2024-11-06 15:02:36.242043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:07.202 [2024-11-06 15:02:36.294016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:07.202 [2024-11-06 15:02:36.294147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:07.202 [2024-11-06 15:02:36.294254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:07.202 [2024-11-06 15:02:36.294256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.138 Running I/O for 1 seconds... 00:04:08.139 lcore 0: 201715 00:04:08.139 lcore 1: 201714 00:04:08.139 lcore 2: 201713 00:04:08.139 lcore 3: 201715 00:04:08.139 done. 00:04:08.139 00:04:08.139 real 0m1.296s 00:04:08.139 user 0m4.131s 00:04:08.139 sys 0m0.047s 00:04:08.139 15:02:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:08.139 ************************************ 00:04:08.139 END TEST event_perf 00:04:08.139 ************************************ 00:04:08.139 15:02:37 -- common/autotest_common.sh@10 -- # set +x 00:04:08.398 15:02:37 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:08.398 15:02:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:08.398 15:02:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:08.398 15:02:37 -- common/autotest_common.sh@10 -- # set +x 00:04:08.398 ************************************ 00:04:08.398 START TEST event_reactor 00:04:08.398 ************************************ 00:04:08.398 15:02:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:08.398 [2024-11-06 15:02:37.449845] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:08.398 [2024-11-06 15:02:37.449935] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54787 ] 00:04:08.398 [2024-11-06 15:02:37.587933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.398 [2024-11-06 15:02:37.638392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.776 test_start 00:04:09.776 oneshot 00:04:09.776 tick 100 00:04:09.776 tick 100 00:04:09.776 tick 250 00:04:09.776 tick 100 00:04:09.776 tick 100 00:04:09.776 tick 250 00:04:09.776 tick 500 00:04:09.776 tick 100 00:04:09.776 tick 100 00:04:09.776 tick 100 00:04:09.776 tick 250 00:04:09.776 tick 100 00:04:09.776 tick 100 00:04:09.776 test_end 00:04:09.776 ************************************ 00:04:09.776 END TEST event_reactor 00:04:09.776 ************************************ 00:04:09.776 00:04:09.776 real 0m1.284s 00:04:09.776 user 0m1.137s 00:04:09.776 sys 0m0.040s 00:04:09.776 15:02:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:09.776 15:02:38 -- common/autotest_common.sh@10 -- # set +x 00:04:09.776 15:02:38 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:09.776 15:02:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:09.776 15:02:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.776 15:02:38 -- common/autotest_common.sh@10 -- # set +x 00:04:09.776 ************************************ 00:04:09.776 START TEST event_reactor_perf 00:04:09.776 ************************************ 00:04:09.776 15:02:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:09.776 [2024-11-06 15:02:38.788935] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:09.776 [2024-11-06 15:02:38.789026] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54817 ] 00:04:09.776 [2024-11-06 15:02:38.924943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.776 [2024-11-06 15:02:38.975485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.220 test_start 00:04:11.220 test_end 00:04:11.220 Performance: 425777 events per second 00:04:11.220 ************************************ 00:04:11.220 END TEST event_reactor_perf 00:04:11.220 ************************************ 00:04:11.220 00:04:11.220 real 0m1.297s 00:04:11.220 user 0m1.156s 00:04:11.220 sys 0m0.034s 00:04:11.220 15:02:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:11.220 15:02:40 -- common/autotest_common.sh@10 -- # set +x 00:04:11.220 15:02:40 -- event/event.sh@49 -- # uname -s 00:04:11.220 15:02:40 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:11.220 15:02:40 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:11.220 15:02:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.220 15:02:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.220 15:02:40 -- common/autotest_common.sh@10 -- # set +x 00:04:11.220 ************************************ 00:04:11.220 START TEST event_scheduler 00:04:11.220 ************************************ 00:04:11.220 15:02:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:11.220 * Looking for test storage... 00:04:11.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:11.220 15:02:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:11.220 15:02:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:11.220 15:02:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:11.220 15:02:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:11.220 15:02:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:11.220 15:02:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:11.220 15:02:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:11.220 15:02:40 -- scripts/common.sh@335 -- # IFS=.-: 00:04:11.220 15:02:40 -- scripts/common.sh@335 -- # read -ra ver1 00:04:11.220 15:02:40 -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.220 15:02:40 -- scripts/common.sh@336 -- # read -ra ver2 00:04:11.220 15:02:40 -- scripts/common.sh@337 -- # local 'op=<' 00:04:11.220 15:02:40 -- scripts/common.sh@339 -- # ver1_l=2 00:04:11.220 15:02:40 -- scripts/common.sh@340 -- # ver2_l=1 00:04:11.220 15:02:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:11.220 15:02:40 -- scripts/common.sh@343 -- # case "$op" in 00:04:11.220 15:02:40 -- scripts/common.sh@344 -- # : 1 00:04:11.220 15:02:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:11.220 15:02:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:11.220 15:02:40 -- scripts/common.sh@364 -- # decimal 1 00:04:11.220 15:02:40 -- scripts/common.sh@352 -- # local d=1 00:04:11.220 15:02:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.220 15:02:40 -- scripts/common.sh@354 -- # echo 1 00:04:11.220 15:02:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:11.220 15:02:40 -- scripts/common.sh@365 -- # decimal 2 00:04:11.220 15:02:40 -- scripts/common.sh@352 -- # local d=2 00:04:11.220 15:02:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.220 15:02:40 -- scripts/common.sh@354 -- # echo 2 00:04:11.220 15:02:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:11.220 15:02:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:11.220 15:02:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:11.220 15:02:40 -- scripts/common.sh@367 -- # return 0 00:04:11.220 15:02:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.220 15:02:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:11.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.221 --rc genhtml_branch_coverage=1 00:04:11.221 --rc genhtml_function_coverage=1 00:04:11.221 --rc genhtml_legend=1 00:04:11.221 --rc geninfo_all_blocks=1 00:04:11.221 --rc geninfo_unexecuted_blocks=1 00:04:11.221 00:04:11.221 ' 00:04:11.221 15:02:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:11.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.221 --rc genhtml_branch_coverage=1 00:04:11.221 --rc genhtml_function_coverage=1 00:04:11.221 --rc genhtml_legend=1 00:04:11.221 --rc geninfo_all_blocks=1 00:04:11.221 --rc geninfo_unexecuted_blocks=1 00:04:11.221 00:04:11.221 ' 00:04:11.221 15:02:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:11.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.221 --rc genhtml_branch_coverage=1 00:04:11.221 --rc genhtml_function_coverage=1 00:04:11.221 --rc genhtml_legend=1 00:04:11.221 --rc geninfo_all_blocks=1 00:04:11.221 --rc geninfo_unexecuted_blocks=1 00:04:11.221 00:04:11.221 ' 00:04:11.221 15:02:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:11.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.221 --rc genhtml_branch_coverage=1 00:04:11.221 --rc genhtml_function_coverage=1 00:04:11.221 --rc genhtml_legend=1 00:04:11.221 --rc geninfo_all_blocks=1 00:04:11.221 --rc geninfo_unexecuted_blocks=1 00:04:11.221 00:04:11.221 ' 00:04:11.221 15:02:40 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:11.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.221 15:02:40 -- scheduler/scheduler.sh@35 -- # scheduler_pid=54891 00:04:11.221 15:02:40 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:11.221 15:02:40 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.221 15:02:40 -- scheduler/scheduler.sh@37 -- # waitforlisten 54891 00:04:11.221 15:02:40 -- common/autotest_common.sh@829 -- # '[' -z 54891 ']' 00:04:11.221 15:02:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.221 15:02:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:11.221 15:02:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:11.221 15:02:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:11.221 15:02:40 -- common/autotest_common.sh@10 -- # set +x 00:04:11.221 [2024-11-06 15:02:40.341588] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:11.221 [2024-11-06 15:02:40.341921] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54891 ] 00:04:11.221 [2024-11-06 15:02:40.481493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:11.480 [2024-11-06 15:02:40.538922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.480 [2024-11-06 15:02:40.539040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.480 [2024-11-06 15:02:40.539072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:11.480 [2024-11-06 15:02:40.539086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:11.480 15:02:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:11.480 15:02:40 -- common/autotest_common.sh@862 -- # return 0 00:04:11.480 15:02:40 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:11.480 15:02:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.480 15:02:40 -- common/autotest_common.sh@10 -- # set +x 00:04:11.480 POWER: Env isn't set yet! 00:04:11.480 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:11.480 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:11.480 POWER: Cannot set governor of lcore 0 to userspace 00:04:11.480 POWER: Attempting to initialise PSTAT power management... 00:04:11.480 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:11.480 POWER: Cannot set governor of lcore 0 to performance 00:04:11.480 POWER: Attempting to initialise AMD PSTATE power management... 00:04:11.480 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:11.480 POWER: Cannot set governor of lcore 0 to userspace 00:04:11.480 POWER: Attempting to initialise CPPC power management... 00:04:11.480 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:11.480 POWER: Cannot set governor of lcore 0 to userspace 00:04:11.480 POWER: Attempting to initialise VM power management... 
00:04:11.480 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:11.480 POWER: Unable to set Power Management Environment for lcore 0 00:04:11.480 [2024-11-06 15:02:40.601600] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:04:11.480 [2024-11-06 15:02:40.601707] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:04:11.480 [2024-11-06 15:02:40.601752] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:04:11.480 [2024-11-06 15:02:40.601878] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:11.480 [2024-11-06 15:02:40.601922] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:11.480 [2024-11-06 15:02:40.602043] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:11.480 15:02:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.480 15:02:40 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:11.480 15:02:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.480 15:02:40 -- common/autotest_common.sh@10 -- # set +x 00:04:11.480 [2024-11-06 15:02:40.649789] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:11.480 15:02:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.480 15:02:40 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:11.480 15:02:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.480 15:02:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.480 15:02:40 -- common/autotest_common.sh@10 -- # set +x 00:04:11.480 ************************************ 00:04:11.480 START TEST scheduler_create_thread 00:04:11.481 ************************************ 00:04:11.481 15:02:40 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:04:11.481 15:02:40 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:11.481 15:02:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.481 15:02:40 -- common/autotest_common.sh@10 -- # set +x 00:04:11.481 2 00:04:11.481 15:02:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.481 15:02:40 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:11.481 15:02:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.481 15:02:40 -- common/autotest_common.sh@10 -- # set +x 00:04:11.481 3 00:04:11.481 15:02:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.481 15:02:40 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:11.481 15:02:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.481 15:02:40 -- common/autotest_common.sh@10 -- # set +x 00:04:11.481 4 00:04:11.481 15:02:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.481 15:02:40 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:11.481 15:02:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.481 15:02:40 -- common/autotest_common.sh@10 -- # set +x 00:04:11.481 5 00:04:11.481 15:02:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.481 15:02:40 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:11.481 15:02:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.481 15:02:40 -- common/autotest_common.sh@10 -- # set +x 00:04:11.481 6 00:04:11.481 15:02:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.481 15:02:40 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:11.481 15:02:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.481 15:02:40 -- common/autotest_common.sh@10 -- # set +x 00:04:11.481 7 00:04:11.481 15:02:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.481 15:02:40 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:11.481 15:02:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.481 15:02:40 -- common/autotest_common.sh@10 -- # set +x 00:04:11.481 8 00:04:11.481 15:02:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.481 15:02:40 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:11.481 15:02:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.481 15:02:40 -- common/autotest_common.sh@10 -- # set +x 00:04:11.481 9 00:04:11.481 15:02:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.481 15:02:40 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:11.481 15:02:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.481 15:02:40 -- common/autotest_common.sh@10 -- # set +x 00:04:11.481 10 00:04:11.481 15:02:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.481 15:02:40 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:11.481 15:02:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.481 15:02:40 -- common/autotest_common.sh@10 -- # set +x 00:04:11.740 15:02:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.740 15:02:40 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:11.740 15:02:40 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:11.740 15:02:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.740 15:02:40 -- common/autotest_common.sh@10 -- # set +x 00:04:11.740 15:02:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.740 15:02:40 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:11.740 15:02:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.740 15:02:40 -- common/autotest_common.sh@10 -- # set +x 00:04:13.116 15:02:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.116 15:02:42 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:13.116 15:02:42 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:13.116 15:02:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.117 15:02:42 -- common/autotest_common.sh@10 -- # set +x 00:04:14.052 ************************************ 00:04:14.052 END TEST scheduler_create_thread 00:04:14.052 ************************************ 00:04:14.052 15:02:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.052 00:04:14.052 real 0m2.616s 00:04:14.052 user 0m0.020s 00:04:14.052 sys 0m0.006s 00:04:14.052 15:02:43 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:04:14.052 15:02:43 -- common/autotest_common.sh@10 -- # set +x 00:04:14.052 15:02:43 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:14.052 15:02:43 -- scheduler/scheduler.sh@46 -- # killprocess 54891 00:04:14.052 15:02:43 -- common/autotest_common.sh@936 -- # '[' -z 54891 ']' 00:04:14.052 15:02:43 -- common/autotest_common.sh@940 -- # kill -0 54891 00:04:14.052 15:02:43 -- common/autotest_common.sh@941 -- # uname 00:04:14.052 15:02:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:14.052 15:02:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54891 00:04:14.311 killing process with pid 54891 00:04:14.311 15:02:43 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:14.311 15:02:43 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:14.311 15:02:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54891' 00:04:14.311 15:02:43 -- common/autotest_common.sh@955 -- # kill 54891 00:04:14.311 15:02:43 -- common/autotest_common.sh@960 -- # wait 54891 00:04:14.570 [2024-11-06 15:02:43.757418] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:14.829 00:04:14.829 real 0m3.828s 00:04:14.829 user 0m5.688s 00:04:14.829 sys 0m0.283s 00:04:14.830 15:02:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:14.830 ************************************ 00:04:14.830 END TEST event_scheduler 00:04:14.830 ************************************ 00:04:14.830 15:02:43 -- common/autotest_common.sh@10 -- # set +x 00:04:14.830 15:02:43 -- event/event.sh@51 -- # modprobe -n nbd 00:04:14.830 15:02:43 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:14.830 15:02:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:14.830 15:02:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.830 15:02:43 -- common/autotest_common.sh@10 -- # set +x 00:04:14.830 ************************************ 00:04:14.830 START TEST app_repeat 00:04:14.830 ************************************ 00:04:14.830 15:02:44 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:04:14.830 15:02:44 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.830 15:02:44 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.830 15:02:44 -- event/event.sh@13 -- # local nbd_list 00:04:14.830 15:02:44 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:14.830 15:02:44 -- event/event.sh@14 -- # local bdev_list 00:04:14.830 15:02:44 -- event/event.sh@15 -- # local repeat_times=4 00:04:14.830 15:02:44 -- event/event.sh@17 -- # modprobe nbd 00:04:14.830 Process app_repeat pid: 54972 00:04:14.830 spdk_app_start Round 0 00:04:14.830 15:02:44 -- event/event.sh@19 -- # repeat_pid=54972 00:04:14.830 15:02:44 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.830 15:02:44 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:14.830 15:02:44 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 54972' 00:04:14.830 15:02:44 -- event/event.sh@23 -- # for i in {0..2} 00:04:14.830 15:02:44 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:14.830 15:02:44 -- event/event.sh@25 -- # waitforlisten 54972 /var/tmp/spdk-nbd.sock 00:04:14.830 15:02:44 -- common/autotest_common.sh@829 -- # '[' -z 54972 ']' 00:04:14.830 15:02:44 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:14.830 15:02:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:14.830 15:02:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:14.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:14.830 15:02:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:14.830 15:02:44 -- common/autotest_common.sh@10 -- # set +x 00:04:14.830 [2024-11-06 15:02:44.034828] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:14.830 [2024-11-06 15:02:44.035080] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54972 ] 00:04:15.089 [2024-11-06 15:02:44.172398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:15.089 [2024-11-06 15:02:44.222756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.089 [2024-11-06 15:02:44.222762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.025 15:02:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:16.025 15:02:44 -- common/autotest_common.sh@862 -- # return 0 00:04:16.025 15:02:44 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:16.025 Malloc0 00:04:16.025 15:02:45 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:16.283 Malloc1 00:04:16.283 15:02:45 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:16.283 15:02:45 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.283 15:02:45 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.283 15:02:45 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:16.283 15:02:45 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.283 15:02:45 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:16.283 15:02:45 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:16.283 15:02:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.283 15:02:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.283 15:02:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:16.283 15:02:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.283 15:02:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:16.283 15:02:45 -- bdev/nbd_common.sh@12 -- # local i 00:04:16.283 15:02:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:16.283 15:02:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.283 15:02:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:16.541 /dev/nbd0 00:04:16.541 15:02:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:16.541 15:02:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:16.541 15:02:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:16.541 15:02:45 -- common/autotest_common.sh@867 -- # local i 00:04:16.541 15:02:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:16.541 15:02:45 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:16.541 15:02:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:16.541 15:02:45 -- common/autotest_common.sh@871 -- # break 00:04:16.541 15:02:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:16.541 15:02:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:16.541 15:02:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:16.541 1+0 records in 00:04:16.541 1+0 records out 00:04:16.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057762 s, 7.1 MB/s 00:04:16.541 15:02:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:16.541 15:02:45 -- common/autotest_common.sh@884 -- # size=4096 00:04:16.541 15:02:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:16.541 15:02:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:16.541 15:02:45 -- common/autotest_common.sh@887 -- # return 0 00:04:16.541 15:02:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:16.541 15:02:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.541 15:02:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:16.800 /dev/nbd1 00:04:16.800 15:02:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:16.800 15:02:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:16.800 15:02:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:16.800 15:02:45 -- common/autotest_common.sh@867 -- # local i 00:04:16.800 15:02:46 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:16.800 15:02:46 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:16.801 15:02:46 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:16.801 15:02:46 -- common/autotest_common.sh@871 -- # break 00:04:16.801 15:02:46 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:16.801 15:02:46 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:16.801 15:02:46 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:16.801 1+0 records in 00:04:16.801 1+0 records out 00:04:16.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281547 s, 14.5 MB/s 00:04:16.801 15:02:46 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:16.801 15:02:46 -- common/autotest_common.sh@884 -- # size=4096 00:04:16.801 15:02:46 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:16.801 15:02:46 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:16.801 15:02:46 -- common/autotest_common.sh@887 -- # return 0 00:04:16.801 15:02:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:16.801 15:02:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.801 15:02:46 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:16.801 15:02:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.801 15:02:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:17.060 15:02:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:17.060 { 00:04:17.060 "nbd_device": "/dev/nbd0", 00:04:17.060 "bdev_name": "Malloc0" 00:04:17.060 }, 00:04:17.060 { 00:04:17.060 "nbd_device": "/dev/nbd1", 
00:04:17.060 "bdev_name": "Malloc1" 00:04:17.060 } 00:04:17.060 ]' 00:04:17.060 15:02:46 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:17.060 { 00:04:17.060 "nbd_device": "/dev/nbd0", 00:04:17.060 "bdev_name": "Malloc0" 00:04:17.060 }, 00:04:17.060 { 00:04:17.060 "nbd_device": "/dev/nbd1", 00:04:17.060 "bdev_name": "Malloc1" 00:04:17.060 } 00:04:17.060 ]' 00:04:17.060 15:02:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:17.318 /dev/nbd1' 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:17.318 /dev/nbd1' 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@65 -- # count=2 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@95 -- # count=2 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:17.318 256+0 records in 00:04:17.318 256+0 records out 00:04:17.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.007452 s, 141 MB/s 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:17.318 256+0 records in 00:04:17.318 256+0 records out 00:04:17.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243957 s, 43.0 MB/s 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:17.318 256+0 records in 00:04:17.318 256+0 records out 00:04:17.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023867 s, 43.9 MB/s 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:17.318 15:02:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:17.319 15:02:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:17.319 15:02:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:17.319 15:02:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:17.319 15:02:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:17.319 15:02:46 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:17.319 15:02:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:17.319 15:02:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.319 15:02:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.319 15:02:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:17.319 15:02:46 -- bdev/nbd_common.sh@51 -- # local i 00:04:17.319 15:02:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:17.319 15:02:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:17.577 15:02:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:17.577 15:02:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:17.577 15:02:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:17.577 15:02:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:17.577 15:02:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:17.577 15:02:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:17.577 15:02:46 -- bdev/nbd_common.sh@41 -- # break 00:04:17.577 15:02:46 -- bdev/nbd_common.sh@45 -- # return 0 00:04:17.577 15:02:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:17.577 15:02:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:17.836 15:02:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:17.836 15:02:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:17.836 15:02:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:17.836 15:02:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:17.836 15:02:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:17.836 15:02:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:17.836 15:02:46 -- bdev/nbd_common.sh@41 -- # break 00:04:17.836 15:02:46 -- bdev/nbd_common.sh@45 -- # return 0 00:04:17.836 15:02:46 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:17.836 15:02:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.836 15:02:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:18.095 15:02:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:18.095 15:02:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:18.095 15:02:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:18.095 15:02:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:18.095 15:02:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:18.095 15:02:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:18.095 15:02:47 -- bdev/nbd_common.sh@65 -- # true 00:04:18.095 15:02:47 -- bdev/nbd_common.sh@65 -- # count=0 00:04:18.095 15:02:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:18.096 15:02:47 -- bdev/nbd_common.sh@104 -- # count=0 00:04:18.096 15:02:47 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:18.096 15:02:47 -- bdev/nbd_common.sh@109 -- # return 0 00:04:18.096 15:02:47 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:18.354 15:02:47 -- event/event.sh@35 -- # sleep 3 00:04:18.613 [2024-11-06 15:02:47.664330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:18.613 [2024-11-06 15:02:47.712583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.613 [2024-11-06 
15:02:47.712589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.613 [2024-11-06 15:02:47.740119] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:18.613 [2024-11-06 15:02:47.740186] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:21.901 spdk_app_start Round 1 00:04:21.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:21.901 15:02:50 -- event/event.sh@23 -- # for i in {0..2} 00:04:21.901 15:02:50 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:21.901 15:02:50 -- event/event.sh@25 -- # waitforlisten 54972 /var/tmp/spdk-nbd.sock 00:04:21.901 15:02:50 -- common/autotest_common.sh@829 -- # '[' -z 54972 ']' 00:04:21.901 15:02:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:21.901 15:02:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:21.901 15:02:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:21.901 15:02:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:21.901 15:02:50 -- common/autotest_common.sh@10 -- # set +x 00:04:21.901 15:02:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:21.901 15:02:50 -- common/autotest_common.sh@862 -- # return 0 00:04:21.901 15:02:50 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:21.901 Malloc0 00:04:21.901 15:02:50 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:22.161 Malloc1 00:04:22.161 15:02:51 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:22.161 15:02:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.161 15:02:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.161 15:02:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:22.161 15:02:51 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.161 15:02:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:22.161 15:02:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:22.161 15:02:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.161 15:02:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.161 15:02:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:22.161 15:02:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.161 15:02:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:22.161 15:02:51 -- bdev/nbd_common.sh@12 -- # local i 00:04:22.161 15:02:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:22.161 15:02:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:22.161 15:02:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:22.420 /dev/nbd0 00:04:22.420 15:02:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:22.420 15:02:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:22.420 15:02:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:22.420 15:02:51 -- common/autotest_common.sh@867 -- # local i 00:04:22.420 15:02:51 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:04:22.420 15:02:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:22.420 15:02:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:22.420 15:02:51 -- common/autotest_common.sh@871 -- # break 00:04:22.420 15:02:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:22.420 15:02:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:22.420 15:02:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:22.420 1+0 records in 00:04:22.420 1+0 records out 00:04:22.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241896 s, 16.9 MB/s 00:04:22.420 15:02:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:22.420 15:02:51 -- common/autotest_common.sh@884 -- # size=4096 00:04:22.420 15:02:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:22.420 15:02:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:22.420 15:02:51 -- common/autotest_common.sh@887 -- # return 0 00:04:22.420 15:02:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:22.420 15:02:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:22.420 15:02:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:22.680 /dev/nbd1 00:04:22.680 15:02:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:22.680 15:02:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:22.680 15:02:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:22.680 15:02:51 -- common/autotest_common.sh@867 -- # local i 00:04:22.680 15:02:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:22.680 15:02:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:22.680 15:02:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:22.680 15:02:51 -- common/autotest_common.sh@871 -- # break 00:04:22.680 15:02:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:22.680 15:02:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:22.680 15:02:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:22.680 1+0 records in 00:04:22.680 1+0 records out 00:04:22.680 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422121 s, 9.7 MB/s 00:04:22.680 15:02:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:22.680 15:02:51 -- common/autotest_common.sh@884 -- # size=4096 00:04:22.680 15:02:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:22.680 15:02:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:22.680 15:02:51 -- common/autotest_common.sh@887 -- # return 0 00:04:22.680 15:02:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:22.680 15:02:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:22.680 15:02:51 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:22.680 15:02:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.680 15:02:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:22.939 { 00:04:22.939 "nbd_device": "/dev/nbd0", 00:04:22.939 "bdev_name": "Malloc0" 00:04:22.939 }, 00:04:22.939 { 00:04:22.939 
"nbd_device": "/dev/nbd1", 00:04:22.939 "bdev_name": "Malloc1" 00:04:22.939 } 00:04:22.939 ]' 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:22.939 { 00:04:22.939 "nbd_device": "/dev/nbd0", 00:04:22.939 "bdev_name": "Malloc0" 00:04:22.939 }, 00:04:22.939 { 00:04:22.939 "nbd_device": "/dev/nbd1", 00:04:22.939 "bdev_name": "Malloc1" 00:04:22.939 } 00:04:22.939 ]' 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:22.939 /dev/nbd1' 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:22.939 /dev/nbd1' 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@65 -- # count=2 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@95 -- # count=2 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:22.939 256+0 records in 00:04:22.939 256+0 records out 00:04:22.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106802 s, 98.2 MB/s 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:22.939 256+0 records in 00:04:22.939 256+0 records out 00:04:22.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297815 s, 35.2 MB/s 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:22.939 15:02:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:23.199 256+0 records in 00:04:23.199 256+0 records out 00:04:23.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268992 s, 39.0 MB/s 00:04:23.199 15:02:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:23.199 15:02:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.199 15:02:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:23.199 15:02:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:23.199 15:02:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:23.199 15:02:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:23.199 15:02:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:23.199 15:02:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.199 15:02:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:23.199 15:02:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.199 15:02:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:23.199 15:02:52 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:23.199 15:02:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:23.199 15:02:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.199 15:02:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.199 15:02:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:23.199 15:02:52 -- bdev/nbd_common.sh@51 -- # local i 00:04:23.199 15:02:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:23.199 15:02:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:23.458 15:02:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:23.458 15:02:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:23.458 15:02:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:23.458 15:02:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:23.458 15:02:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:23.458 15:02:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:23.458 15:02:52 -- bdev/nbd_common.sh@41 -- # break 00:04:23.458 15:02:52 -- bdev/nbd_common.sh@45 -- # return 0 00:04:23.458 15:02:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:23.458 15:02:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:23.717 15:02:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:23.717 15:02:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:23.717 15:02:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:23.717 15:02:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:23.717 15:02:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:23.717 15:02:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:23.717 15:02:52 -- bdev/nbd_common.sh@41 -- # break 00:04:23.717 15:02:52 -- bdev/nbd_common.sh@45 -- # return 0 00:04:23.717 15:02:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:23.717 15:02:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.717 15:02:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:23.976 15:02:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:23.977 15:02:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:23.977 15:02:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:23.977 15:02:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:23.977 15:02:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:23.977 15:02:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:23.977 15:02:53 -- bdev/nbd_common.sh@65 -- # true 00:04:23.977 15:02:53 -- bdev/nbd_common.sh@65 -- # count=0 00:04:23.977 15:02:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:23.977 15:02:53 -- bdev/nbd_common.sh@104 -- # count=0 00:04:23.977 15:02:53 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:23.977 15:02:53 -- bdev/nbd_common.sh@109 -- # return 0 00:04:23.977 15:02:53 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:24.235 15:02:53 -- event/event.sh@35 -- # sleep 3 00:04:24.494 [2024-11-06 15:02:53.573997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:24.494 [2024-11-06 15:02:53.633251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
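The write/verify pass shown above for each round reduces to a plain dd/cmp sequence; the following is a minimal standalone sketch of it (the 1 MiB size, block size, and cmp flags are taken from the log, while the pattern-file path and device list are placeholders):

    # write a 1 MiB random pattern, copy it onto each exported NBD device,
    # then read the devices back and compare against the pattern
    pattern=/tmp/nbdrandtest
    dd if=/dev/urandom of="$pattern" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$pattern" "$nbd"    # non-zero exit status means the data did not round-trip
    done
    rm "$pattern"
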
00:04:24.494 [2024-11-06 15:02:53.633260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.494 [2024-11-06 15:02:53.663697] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:24.494 [2024-11-06 15:02:53.663746] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:27.782 spdk_app_start Round 2 00:04:27.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:27.782 15:02:56 -- event/event.sh@23 -- # for i in {0..2} 00:04:27.782 15:02:56 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:27.782 15:02:56 -- event/event.sh@25 -- # waitforlisten 54972 /var/tmp/spdk-nbd.sock 00:04:27.782 15:02:56 -- common/autotest_common.sh@829 -- # '[' -z 54972 ']' 00:04:27.782 15:02:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:27.782 15:02:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:27.782 15:02:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:27.782 15:02:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:27.782 15:02:56 -- common/autotest_common.sh@10 -- # set +x 00:04:27.782 15:02:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:27.782 15:02:56 -- common/autotest_common.sh@862 -- # return 0 00:04:27.782 15:02:56 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:27.782 Malloc0 00:04:27.782 15:02:56 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:28.042 Malloc1 00:04:28.042 15:02:57 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:28.042 15:02:57 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.042 15:02:57 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:28.042 15:02:57 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:28.042 15:02:57 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.042 15:02:57 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:28.042 15:02:57 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:28.042 15:02:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.042 15:02:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:28.042 15:02:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:28.042 15:02:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.042 15:02:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:28.042 15:02:57 -- bdev/nbd_common.sh@12 -- # local i 00:04:28.042 15:02:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:28.042 15:02:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:28.042 15:02:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:28.302 /dev/nbd0 00:04:28.302 15:02:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:28.302 15:02:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:28.302 15:02:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:28.302 15:02:57 -- common/autotest_common.sh@867 -- # local i 00:04:28.302 15:02:57 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:28.302 15:02:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:28.302 15:02:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:28.302 15:02:57 -- common/autotest_common.sh@871 -- # break 00:04:28.302 15:02:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:28.302 15:02:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:28.302 15:02:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:28.302 1+0 records in 00:04:28.302 1+0 records out 00:04:28.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220607 s, 18.6 MB/s 00:04:28.302 15:02:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:28.302 15:02:57 -- common/autotest_common.sh@884 -- # size=4096 00:04:28.302 15:02:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:28.302 15:02:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:28.302 15:02:57 -- common/autotest_common.sh@887 -- # return 0 00:04:28.302 15:02:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:28.302 15:02:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:28.302 15:02:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:28.561 /dev/nbd1 00:04:28.561 15:02:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:28.561 15:02:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:28.561 15:02:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:28.561 15:02:57 -- common/autotest_common.sh@867 -- # local i 00:04:28.561 15:02:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:28.561 15:02:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:28.561 15:02:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:28.561 15:02:57 -- common/autotest_common.sh@871 -- # break 00:04:28.561 15:02:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:28.561 15:02:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:28.561 15:02:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:28.561 1+0 records in 00:04:28.561 1+0 records out 00:04:28.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228703 s, 17.9 MB/s 00:04:28.561 15:02:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:28.561 15:02:57 -- common/autotest_common.sh@884 -- # size=4096 00:04:28.561 15:02:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:28.561 15:02:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:28.561 15:02:57 -- common/autotest_common.sh@887 -- # return 0 00:04:28.561 15:02:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:28.561 15:02:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:28.561 15:02:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:28.561 15:02:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.561 15:02:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:28.821 15:02:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:28.821 { 00:04:28.821 "nbd_device": "/dev/nbd0", 00:04:28.821 "bdev_name": "Malloc0" 
00:04:28.821 }, 00:04:28.821 { 00:04:28.821 "nbd_device": "/dev/nbd1", 00:04:28.821 "bdev_name": "Malloc1" 00:04:28.821 } 00:04:28.821 ]' 00:04:28.821 15:02:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:28.821 { 00:04:28.821 "nbd_device": "/dev/nbd0", 00:04:28.821 "bdev_name": "Malloc0" 00:04:28.821 }, 00:04:28.821 { 00:04:28.821 "nbd_device": "/dev/nbd1", 00:04:28.821 "bdev_name": "Malloc1" 00:04:28.821 } 00:04:28.821 ]' 00:04:28.821 15:02:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:28.821 /dev/nbd1' 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:28.821 /dev/nbd1' 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@65 -- # count=2 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@95 -- # count=2 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:28.821 256+0 records in 00:04:28.821 256+0 records out 00:04:28.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00938532 s, 112 MB/s 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:28.821 256+0 records in 00:04:28.821 256+0 records out 00:04:28.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248669 s, 42.2 MB/s 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:28.821 256+0 records in 00:04:28.821 256+0 records out 00:04:28.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246399 s, 42.6 MB/s 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:04:28.821 15:02:58 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:29.080 15:02:58 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:29.080 15:02:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.080 15:02:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.080 15:02:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:29.080 15:02:58 -- bdev/nbd_common.sh@51 -- # local i 00:04:29.080 15:02:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:29.080 15:02:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:29.080 15:02:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:29.080 15:02:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:29.080 15:02:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:29.080 15:02:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:29.080 15:02:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:29.080 15:02:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:29.080 15:02:58 -- bdev/nbd_common.sh@41 -- # break 00:04:29.080 15:02:58 -- bdev/nbd_common.sh@45 -- # return 0 00:04:29.080 15:02:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:29.080 15:02:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:29.647 15:02:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:29.647 15:02:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:29.647 15:02:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:29.647 15:02:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:29.647 15:02:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:29.647 15:02:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:29.647 15:02:58 -- bdev/nbd_common.sh@41 -- # break 00:04:29.647 15:02:58 -- bdev/nbd_common.sh@45 -- # return 0 00:04:29.647 15:02:58 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:29.647 15:02:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.647 15:02:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:29.647 15:02:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:29.647 15:02:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:29.647 15:02:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:29.906 15:02:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:29.906 15:02:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:29.906 15:02:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:29.906 15:02:58 -- bdev/nbd_common.sh@65 -- # true 00:04:29.906 15:02:58 -- bdev/nbd_common.sh@65 -- # count=0 00:04:29.906 15:02:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:29.906 15:02:58 -- bdev/nbd_common.sh@104 -- # count=0 00:04:29.906 15:02:58 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:29.906 15:02:58 -- bdev/nbd_common.sh@109 -- # return 0 00:04:29.906 15:02:58 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:30.165 15:02:59 -- event/event.sh@35 -- # sleep 3 00:04:30.165 [2024-11-06 15:02:59.321102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:30.165 [2024-11-06 15:02:59.371151] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:04:30.165 [2024-11-06 15:02:59.371162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.165 [2024-11-06 15:02:59.399053] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:30.165 [2024-11-06 15:02:59.399109] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:33.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:33.452 15:03:02 -- event/event.sh@38 -- # waitforlisten 54972 /var/tmp/spdk-nbd.sock 00:04:33.452 15:03:02 -- common/autotest_common.sh@829 -- # '[' -z 54972 ']' 00:04:33.452 15:03:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:33.452 15:03:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.452 15:03:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:33.452 15:03:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.452 15:03:02 -- common/autotest_common.sh@10 -- # set +x 00:04:33.452 15:03:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:33.452 15:03:02 -- common/autotest_common.sh@862 -- # return 0 00:04:33.452 15:03:02 -- event/event.sh@39 -- # killprocess 54972 00:04:33.452 15:03:02 -- common/autotest_common.sh@936 -- # '[' -z 54972 ']' 00:04:33.452 15:03:02 -- common/autotest_common.sh@940 -- # kill -0 54972 00:04:33.452 15:03:02 -- common/autotest_common.sh@941 -- # uname 00:04:33.452 15:03:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:33.452 15:03:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54972 00:04:33.452 killing process with pid 54972 00:04:33.452 15:03:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:33.452 15:03:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:33.452 15:03:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54972' 00:04:33.452 15:03:02 -- common/autotest_common.sh@955 -- # kill 54972 00:04:33.452 15:03:02 -- common/autotest_common.sh@960 -- # wait 54972 00:04:33.452 spdk_app_start is called in Round 0. 00:04:33.452 Shutdown signal received, stop current app iteration 00:04:33.452 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:33.452 spdk_app_start is called in Round 1. 00:04:33.452 Shutdown signal received, stop current app iteration 00:04:33.452 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:33.452 spdk_app_start is called in Round 2. 00:04:33.452 Shutdown signal received, stop current app iteration 00:04:33.452 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:33.452 spdk_app_start is called in Round 3. 
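Each of the three rounds logged above drives the same RPC cycle against the app_repeat target over /var/tmp/spdk-nbd.sock; the loop below is an illustrative condensation of that cycle built from the rpc.py calls visible in the log, not the event.sh source itself:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for round in 0 1 2; do
        # two 64 MiB malloc bdevs with 4 KiB blocks, exported as /dev/nbd0 and /dev/nbd1
        $rpc bdev_malloc_create 64 4096            # -> Malloc0
        $rpc bdev_malloc_create 64 4096            # -> Malloc1
        $rpc nbd_start_disk Malloc0 /dev/nbd0
        $rpc nbd_start_disk Malloc1 /dev/nbd1
        # ... dd/cmp data verification as sketched earlier ...
        $rpc nbd_stop_disk /dev/nbd0
        $rpc nbd_stop_disk /dev/nbd1
        # SIGTERM makes app_repeat stop the current iteration and reinitialize for the next round
        $rpc spdk_kill_instance SIGTERM
        sleep 3
    done
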
00:04:33.452 Shutdown signal received, stop current app iteration 00:04:33.452 ************************************ 00:04:33.452 END TEST app_repeat 00:04:33.452 ************************************ 00:04:33.452 15:03:02 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:33.452 15:03:02 -- event/event.sh@42 -- # return 0 00:04:33.452 00:04:33.452 real 0m18.643s 00:04:33.452 user 0m42.200s 00:04:33.452 sys 0m2.498s 00:04:33.452 15:03:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:33.452 15:03:02 -- common/autotest_common.sh@10 -- # set +x 00:04:33.452 15:03:02 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:33.452 15:03:02 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:33.452 15:03:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:33.452 15:03:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:33.452 15:03:02 -- common/autotest_common.sh@10 -- # set +x 00:04:33.452 ************************************ 00:04:33.452 START TEST cpu_locks 00:04:33.452 ************************************ 00:04:33.452 15:03:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:33.711 * Looking for test storage... 00:04:33.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:33.711 15:03:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:33.711 15:03:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:33.711 15:03:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:33.711 15:03:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:33.711 15:03:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:33.711 15:03:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:33.711 15:03:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:33.711 15:03:02 -- scripts/common.sh@335 -- # IFS=.-: 00:04:33.711 15:03:02 -- scripts/common.sh@335 -- # read -ra ver1 00:04:33.711 15:03:02 -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.711 15:03:02 -- scripts/common.sh@336 -- # read -ra ver2 00:04:33.711 15:03:02 -- scripts/common.sh@337 -- # local 'op=<' 00:04:33.711 15:03:02 -- scripts/common.sh@339 -- # ver1_l=2 00:04:33.711 15:03:02 -- scripts/common.sh@340 -- # ver2_l=1 00:04:33.711 15:03:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:33.711 15:03:02 -- scripts/common.sh@343 -- # case "$op" in 00:04:33.711 15:03:02 -- scripts/common.sh@344 -- # : 1 00:04:33.711 15:03:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:33.711 15:03:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.711 15:03:02 -- scripts/common.sh@364 -- # decimal 1 00:04:33.711 15:03:02 -- scripts/common.sh@352 -- # local d=1 00:04:33.711 15:03:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.711 15:03:02 -- scripts/common.sh@354 -- # echo 1 00:04:33.711 15:03:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:33.711 15:03:02 -- scripts/common.sh@365 -- # decimal 2 00:04:33.711 15:03:02 -- scripts/common.sh@352 -- # local d=2 00:04:33.711 15:03:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.711 15:03:02 -- scripts/common.sh@354 -- # echo 2 00:04:33.711 15:03:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:33.711 15:03:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:33.711 15:03:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:33.711 15:03:02 -- scripts/common.sh@367 -- # return 0 00:04:33.711 15:03:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.711 15:03:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:33.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.711 --rc genhtml_branch_coverage=1 00:04:33.711 --rc genhtml_function_coverage=1 00:04:33.711 --rc genhtml_legend=1 00:04:33.711 --rc geninfo_all_blocks=1 00:04:33.711 --rc geninfo_unexecuted_blocks=1 00:04:33.711 00:04:33.711 ' 00:04:33.711 15:03:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:33.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.711 --rc genhtml_branch_coverage=1 00:04:33.711 --rc genhtml_function_coverage=1 00:04:33.711 --rc genhtml_legend=1 00:04:33.711 --rc geninfo_all_blocks=1 00:04:33.711 --rc geninfo_unexecuted_blocks=1 00:04:33.711 00:04:33.711 ' 00:04:33.711 15:03:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:33.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.711 --rc genhtml_branch_coverage=1 00:04:33.711 --rc genhtml_function_coverage=1 00:04:33.711 --rc genhtml_legend=1 00:04:33.711 --rc geninfo_all_blocks=1 00:04:33.711 --rc geninfo_unexecuted_blocks=1 00:04:33.711 00:04:33.711 ' 00:04:33.711 15:03:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:33.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.711 --rc genhtml_branch_coverage=1 00:04:33.711 --rc genhtml_function_coverage=1 00:04:33.711 --rc genhtml_legend=1 00:04:33.711 --rc geninfo_all_blocks=1 00:04:33.711 --rc geninfo_unexecuted_blocks=1 00:04:33.711 00:04:33.711 ' 00:04:33.711 15:03:02 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:33.711 15:03:02 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:33.711 15:03:02 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:33.711 15:03:02 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:33.711 15:03:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:33.711 15:03:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:33.711 15:03:02 -- common/autotest_common.sh@10 -- # set +x 00:04:33.711 ************************************ 00:04:33.711 START TEST default_locks 00:04:33.711 ************************************ 00:04:33.711 15:03:02 -- common/autotest_common.sh@1114 -- # default_locks 00:04:33.711 15:03:02 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=55412 00:04:33.711 15:03:02 -- event/cpu_locks.sh@47 -- # waitforlisten 55412 00:04:33.711 15:03:02 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
-m 0x1 00:04:33.711 15:03:02 -- common/autotest_common.sh@829 -- # '[' -z 55412 ']' 00:04:33.712 15:03:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.712 15:03:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.712 15:03:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.712 15:03:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.712 15:03:02 -- common/autotest_common.sh@10 -- # set +x 00:04:33.712 [2024-11-06 15:03:02.980095] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:33.712 [2024-11-06 15:03:02.980379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55412 ] 00:04:33.971 [2024-11-06 15:03:03.112844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.971 [2024-11-06 15:03:03.163211] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:33.971 [2024-11-06 15:03:03.163590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.907 15:03:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.907 15:03:03 -- common/autotest_common.sh@862 -- # return 0 00:04:34.907 15:03:03 -- event/cpu_locks.sh@49 -- # locks_exist 55412 00:04:34.907 15:03:03 -- event/cpu_locks.sh@22 -- # lslocks -p 55412 00:04:34.907 15:03:03 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:35.167 15:03:04 -- event/cpu_locks.sh@50 -- # killprocess 55412 00:04:35.167 15:03:04 -- common/autotest_common.sh@936 -- # '[' -z 55412 ']' 00:04:35.167 15:03:04 -- common/autotest_common.sh@940 -- # kill -0 55412 00:04:35.167 15:03:04 -- common/autotest_common.sh@941 -- # uname 00:04:35.167 15:03:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:35.167 15:03:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55412 00:04:35.167 killing process with pid 55412 00:04:35.167 15:03:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:35.167 15:03:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:35.167 15:03:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55412' 00:04:35.167 15:03:04 -- common/autotest_common.sh@955 -- # kill 55412 00:04:35.167 15:03:04 -- common/autotest_common.sh@960 -- # wait 55412 00:04:35.426 15:03:04 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 55412 00:04:35.426 15:03:04 -- common/autotest_common.sh@650 -- # local es=0 00:04:35.426 15:03:04 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 55412 00:04:35.426 15:03:04 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:35.426 15:03:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:35.426 15:03:04 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:35.426 15:03:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:35.426 15:03:04 -- common/autotest_common.sh@653 -- # waitforlisten 55412 00:04:35.426 15:03:04 -- common/autotest_common.sh@829 -- # '[' -z 55412 ']' 00:04:35.426 15:03:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.426 15:03:04 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:04:35.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.426 15:03:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.426 ERROR: process (pid: 55412) is no longer running 00:04:35.426 15:03:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:35.426 15:03:04 -- common/autotest_common.sh@10 -- # set +x 00:04:35.426 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (55412) - No such process 00:04:35.426 15:03:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.426 15:03:04 -- common/autotest_common.sh@862 -- # return 1 00:04:35.426 15:03:04 -- common/autotest_common.sh@653 -- # es=1 00:04:35.426 15:03:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:35.426 15:03:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:35.427 15:03:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:35.427 15:03:04 -- event/cpu_locks.sh@54 -- # no_locks 00:04:35.427 15:03:04 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:35.427 15:03:04 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:35.427 15:03:04 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:35.427 00:04:35.427 real 0m1.657s 00:04:35.427 user 0m1.938s 00:04:35.427 sys 0m0.376s 00:04:35.427 15:03:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:35.427 15:03:04 -- common/autotest_common.sh@10 -- # set +x 00:04:35.427 ************************************ 00:04:35.427 END TEST default_locks 00:04:35.427 ************************************ 00:04:35.427 15:03:04 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:35.427 15:03:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:35.427 15:03:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:35.427 15:03:04 -- common/autotest_common.sh@10 -- # set +x 00:04:35.427 ************************************ 00:04:35.427 START TEST default_locks_via_rpc 00:04:35.427 ************************************ 00:04:35.427 15:03:04 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:04:35.427 15:03:04 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=55464 00:04:35.427 15:03:04 -- event/cpu_locks.sh@63 -- # waitforlisten 55464 00:04:35.427 15:03:04 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.427 15:03:04 -- common/autotest_common.sh@829 -- # '[' -z 55464 ']' 00:04:35.427 15:03:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.427 15:03:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:35.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.427 15:03:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.427 15:03:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:35.427 15:03:04 -- common/autotest_common.sh@10 -- # set +x 00:04:35.686 [2024-11-06 15:03:04.705430] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
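The default_locks_via_rpc case starting here toggles the core locks at runtime rather than at startup and then looks for the advisory lock in lslocks output; a condensed sketch of that flow follows (startup waiting is simplified to a sleep, where the real test waits on the RPC socket):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    pid=$!
    sleep 1
    $rpc framework_disable_cpumask_locks       # release the per-core lock while running
    $rpc framework_enable_cpumask_locks        # take it again
    # while the lock is held it is visible in lslocks output for the target pid
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by pid $pid"
    kill "$pid"
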
00:04:35.686 [2024-11-06 15:03:04.706164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55464 ] 00:04:35.686 [2024-11-06 15:03:04.843420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.686 [2024-11-06 15:03:04.892292] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:35.686 [2024-11-06 15:03:04.892446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.621 15:03:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:36.621 15:03:05 -- common/autotest_common.sh@862 -- # return 0 00:04:36.621 15:03:05 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:36.621 15:03:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.621 15:03:05 -- common/autotest_common.sh@10 -- # set +x 00:04:36.621 15:03:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.621 15:03:05 -- event/cpu_locks.sh@67 -- # no_locks 00:04:36.621 15:03:05 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:36.621 15:03:05 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:36.621 15:03:05 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:36.621 15:03:05 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:36.621 15:03:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.621 15:03:05 -- common/autotest_common.sh@10 -- # set +x 00:04:36.621 15:03:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.621 15:03:05 -- event/cpu_locks.sh@71 -- # locks_exist 55464 00:04:36.621 15:03:05 -- event/cpu_locks.sh@22 -- # lslocks -p 55464 00:04:36.621 15:03:05 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:36.885 15:03:05 -- event/cpu_locks.sh@73 -- # killprocess 55464 00:04:36.885 15:03:05 -- common/autotest_common.sh@936 -- # '[' -z 55464 ']' 00:04:36.885 15:03:05 -- common/autotest_common.sh@940 -- # kill -0 55464 00:04:36.885 15:03:05 -- common/autotest_common.sh@941 -- # uname 00:04:36.885 15:03:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:36.885 15:03:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55464 00:04:36.885 killing process with pid 55464 00:04:36.885 15:03:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:36.885 15:03:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:36.885 15:03:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55464' 00:04:36.885 15:03:05 -- common/autotest_common.sh@955 -- # kill 55464 00:04:36.885 15:03:05 -- common/autotest_common.sh@960 -- # wait 55464 00:04:37.157 ************************************ 00:04:37.157 END TEST default_locks_via_rpc 00:04:37.157 ************************************ 00:04:37.157 00:04:37.157 real 0m1.583s 00:04:37.157 user 0m1.816s 00:04:37.157 sys 0m0.365s 00:04:37.157 15:03:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:37.157 15:03:06 -- common/autotest_common.sh@10 -- # set +x 00:04:37.157 15:03:06 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:37.157 15:03:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.157 15:03:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.157 15:03:06 -- common/autotest_common.sh@10 -- # set +x 00:04:37.157 
************************************ 00:04:37.157 START TEST non_locking_app_on_locked_coremask 00:04:37.157 ************************************ 00:04:37.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.157 15:03:06 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:04:37.157 15:03:06 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=55509 00:04:37.157 15:03:06 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.157 15:03:06 -- event/cpu_locks.sh@81 -- # waitforlisten 55509 /var/tmp/spdk.sock 00:04:37.157 15:03:06 -- common/autotest_common.sh@829 -- # '[' -z 55509 ']' 00:04:37.157 15:03:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.157 15:03:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:37.157 15:03:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.157 15:03:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:37.157 15:03:06 -- common/autotest_common.sh@10 -- # set +x 00:04:37.157 [2024-11-06 15:03:06.323503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:37.157 [2024-11-06 15:03:06.323609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55509 ] 00:04:37.432 [2024-11-06 15:03:06.451872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.432 [2024-11-06 15:03:06.502549] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:37.432 [2024-11-06 15:03:06.502761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.370 15:03:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.370 15:03:07 -- common/autotest_common.sh@862 -- # return 0 00:04:38.370 15:03:07 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:38.370 15:03:07 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=55525 00:04:38.370 15:03:07 -- event/cpu_locks.sh@85 -- # waitforlisten 55525 /var/tmp/spdk2.sock 00:04:38.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:38.370 15:03:07 -- common/autotest_common.sh@829 -- # '[' -z 55525 ']' 00:04:38.370 15:03:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:38.370 15:03:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.370 15:03:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:38.370 15:03:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.370 15:03:07 -- common/autotest_common.sh@10 -- # set +x 00:04:38.370 [2024-11-06 15:03:07.357696] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:38.370 [2024-11-06 15:03:07.358248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55525 ] 00:04:38.370 [2024-11-06 15:03:07.489832] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
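The "CPU core locks deactivated" notice above comes from a second spdk_tgt being launched on the same core mask with --disable-cpumask-locks and its own RPC socket, so it does not collide with the lock already held by the first instance; the essential pair of commands, taken from the log with the waiting logic omitted:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    # first instance claims core 0 and its lock, RPC on the default /var/tmp/spdk.sock
    $bin -m 0x1 &
    # second instance reuses core 0; it skips the core lock and listens on a separate socket
    $bin -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
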
00:04:38.370 [2024-11-06 15:03:07.489881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.370 [2024-11-06 15:03:07.598014] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:38.370 [2024-11-06 15:03:07.598155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.308 15:03:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.308 15:03:08 -- common/autotest_common.sh@862 -- # return 0 00:04:39.308 15:03:08 -- event/cpu_locks.sh@87 -- # locks_exist 55509 00:04:39.308 15:03:08 -- event/cpu_locks.sh@22 -- # lslocks -p 55509 00:04:39.308 15:03:08 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:40.245 15:03:09 -- event/cpu_locks.sh@89 -- # killprocess 55509 00:04:40.245 15:03:09 -- common/autotest_common.sh@936 -- # '[' -z 55509 ']' 00:04:40.245 15:03:09 -- common/autotest_common.sh@940 -- # kill -0 55509 00:04:40.245 15:03:09 -- common/autotest_common.sh@941 -- # uname 00:04:40.245 15:03:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:40.245 15:03:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55509 00:04:40.245 killing process with pid 55509 00:04:40.245 15:03:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:40.245 15:03:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:40.245 15:03:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55509' 00:04:40.245 15:03:09 -- common/autotest_common.sh@955 -- # kill 55509 00:04:40.245 15:03:09 -- common/autotest_common.sh@960 -- # wait 55509 00:04:40.505 15:03:09 -- event/cpu_locks.sh@90 -- # killprocess 55525 00:04:40.505 15:03:09 -- common/autotest_common.sh@936 -- # '[' -z 55525 ']' 00:04:40.505 15:03:09 -- common/autotest_common.sh@940 -- # kill -0 55525 00:04:40.505 15:03:09 -- common/autotest_common.sh@941 -- # uname 00:04:40.505 15:03:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:40.505 15:03:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55525 00:04:40.505 killing process with pid 55525 00:04:40.505 15:03:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:40.505 15:03:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:40.505 15:03:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55525' 00:04:40.505 15:03:09 -- common/autotest_common.sh@955 -- # kill 55525 00:04:40.505 15:03:09 -- common/autotest_common.sh@960 -- # wait 55525 00:04:40.764 ************************************ 00:04:40.764 END TEST non_locking_app_on_locked_coremask 00:04:40.764 ************************************ 00:04:40.764 00:04:40.764 real 0m3.730s 00:04:40.764 user 0m4.432s 00:04:40.764 sys 0m0.843s 00:04:40.764 15:03:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:40.764 15:03:10 -- common/autotest_common.sh@10 -- # set +x 00:04:41.023 15:03:10 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:41.023 15:03:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.023 15:03:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.023 15:03:10 -- common/autotest_common.sh@10 -- # set +x 00:04:41.023 ************************************ 00:04:41.023 START TEST locking_app_on_unlocked_coremask 00:04:41.023 ************************************ 00:04:41.023 15:03:10 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:04:41.023 15:03:10 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=55587 00:04:41.023 15:03:10 -- event/cpu_locks.sh@99 -- # waitforlisten 55587 /var/tmp/spdk.sock 00:04:41.023 15:03:10 -- common/autotest_common.sh@829 -- # '[' -z 55587 ']' 00:04:41.023 15:03:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.023 15:03:10 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:41.023 15:03:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.023 15:03:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.023 15:03:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.023 15:03:10 -- common/autotest_common.sh@10 -- # set +x 00:04:41.023 [2024-11-06 15:03:10.119970] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:41.023 [2024-11-06 15:03:10.120887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55587 ] 00:04:41.023 [2024-11-06 15:03:10.264757] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:41.023 [2024-11-06 15:03:10.264794] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.282 [2024-11-06 15:03:10.320436] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:41.282 [2024-11-06 15:03:10.320837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.850 15:03:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.850 15:03:11 -- common/autotest_common.sh@862 -- # return 0 00:04:41.850 15:03:11 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:41.850 15:03:11 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=55603 00:04:41.850 15:03:11 -- event/cpu_locks.sh@103 -- # waitforlisten 55603 /var/tmp/spdk2.sock 00:04:41.850 15:03:11 -- common/autotest_common.sh@829 -- # '[' -z 55603 ']' 00:04:41.850 15:03:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:41.850 15:03:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.850 15:03:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:41.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:41.850 15:03:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.850 15:03:11 -- common/autotest_common.sh@10 -- # set +x 00:04:42.109 [2024-11-06 15:03:11.138527] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:42.109 [2024-11-06 15:03:11.139232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55603 ] 00:04:42.109 [2024-11-06 15:03:11.275057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.109 [2024-11-06 15:03:11.382503] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:42.109 [2024-11-06 15:03:11.382680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.046 15:03:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.046 15:03:12 -- common/autotest_common.sh@862 -- # return 0 00:04:43.046 15:03:12 -- event/cpu_locks.sh@105 -- # locks_exist 55603 00:04:43.046 15:03:12 -- event/cpu_locks.sh@22 -- # lslocks -p 55603 00:04:43.046 15:03:12 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:43.613 15:03:12 -- event/cpu_locks.sh@107 -- # killprocess 55587 00:04:43.613 15:03:12 -- common/autotest_common.sh@936 -- # '[' -z 55587 ']' 00:04:43.613 15:03:12 -- common/autotest_common.sh@940 -- # kill -0 55587 00:04:43.613 15:03:12 -- common/autotest_common.sh@941 -- # uname 00:04:43.613 15:03:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:43.613 15:03:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55587 00:04:43.613 15:03:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:43.613 15:03:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:43.613 killing process with pid 55587 00:04:43.613 15:03:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55587' 00:04:43.613 15:03:12 -- common/autotest_common.sh@955 -- # kill 55587 00:04:43.613 15:03:12 -- common/autotest_common.sh@960 -- # wait 55587 00:04:44.181 15:03:13 -- event/cpu_locks.sh@108 -- # killprocess 55603 00:04:44.181 15:03:13 -- common/autotest_common.sh@936 -- # '[' -z 55603 ']' 00:04:44.181 15:03:13 -- common/autotest_common.sh@940 -- # kill -0 55603 00:04:44.181 15:03:13 -- common/autotest_common.sh@941 -- # uname 00:04:44.181 15:03:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:44.181 15:03:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55603 00:04:44.181 killing process with pid 55603 00:04:44.181 15:03:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:44.181 15:03:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:44.181 15:03:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55603' 00:04:44.181 15:03:13 -- common/autotest_common.sh@955 -- # kill 55603 00:04:44.181 15:03:13 -- common/autotest_common.sh@960 -- # wait 55603 00:04:44.440 00:04:44.440 real 0m3.632s 00:04:44.440 user 0m4.243s 00:04:44.440 sys 0m0.867s 00:04:44.440 15:03:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:44.440 ************************************ 00:04:44.440 END TEST locking_app_on_unlocked_coremask 00:04:44.440 ************************************ 00:04:44.440 15:03:13 -- common/autotest_common.sh@10 -- # set +x 00:04:44.700 15:03:13 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:44.700 15:03:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.700 15:03:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.700 15:03:13 -- common/autotest_common.sh@10 -- # set +x 
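In the locking_app_on_unlocked_coremask run just finished, the first target is launched with --disable-cpumask-locks, so it never takes the core 0 lock and a second target on the same -m 0x1 mask (with its own RPC socket) starts cleanly. Stripped to the flags that appear in the trace, the setup is roughly:

  spdk_tgt -m 0x1 --disable-cpumask-locks          # pid 55587: prints 'CPU core locks deactivated.'
  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock           # pid 55603: claims core 0 unopposed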
00:04:44.700 ************************************ 00:04:44.700 START TEST locking_app_on_locked_coremask 00:04:44.700 ************************************ 00:04:44.700 15:03:13 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:04:44.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.700 15:03:13 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=55670 00:04:44.700 15:03:13 -- event/cpu_locks.sh@116 -- # waitforlisten 55670 /var/tmp/spdk.sock 00:04:44.700 15:03:13 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.700 15:03:13 -- common/autotest_common.sh@829 -- # '[' -z 55670 ']' 00:04:44.700 15:03:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.700 15:03:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.700 15:03:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.700 15:03:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.700 15:03:13 -- common/autotest_common.sh@10 -- # set +x 00:04:44.700 [2024-11-06 15:03:13.810064] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:44.700 [2024-11-06 15:03:13.810884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55670 ] 00:04:44.700 [2024-11-06 15:03:13.952581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.959 [2024-11-06 15:03:14.002201] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:44.959 [2024-11-06 15:03:14.002365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.527 15:03:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.527 15:03:14 -- common/autotest_common.sh@862 -- # return 0 00:04:45.527 15:03:14 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:45.527 15:03:14 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=55686 00:04:45.527 15:03:14 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 55686 /var/tmp/spdk2.sock 00:04:45.527 15:03:14 -- common/autotest_common.sh@650 -- # local es=0 00:04:45.527 15:03:14 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 55686 /var/tmp/spdk2.sock 00:04:45.527 15:03:14 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:45.527 15:03:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.527 15:03:14 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:45.527 15:03:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.527 15:03:14 -- common/autotest_common.sh@653 -- # waitforlisten 55686 /var/tmp/spdk2.sock 00:04:45.527 15:03:14 -- common/autotest_common.sh@829 -- # '[' -z 55686 ']' 00:04:45.527 15:03:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:45.527 15:03:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.527 15:03:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:45.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
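The locking_app_on_locked_coremask trace around this point is the mirror image: the first target (pid 55670) is started without --disable-cpumask-locks and therefore owns the core 0 lock, so the second launch on the same mask is expected to die during startup. That is why its readiness check is wrapped in the NOT helper, roughly:

  NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock   # passes only because pid 55686 aborts with 'Cannot create lock on core 0'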
00:04:45.527 15:03:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.527 15:03:14 -- common/autotest_common.sh@10 -- # set +x 00:04:45.527 [2024-11-06 15:03:14.784966] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:45.527 [2024-11-06 15:03:14.785634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55686 ] 00:04:45.786 [2024-11-06 15:03:14.922336] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 55670 has claimed it. 00:04:45.786 [2024-11-06 15:03:14.922415] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:46.354 ERROR: process (pid: 55686) is no longer running 00:04:46.354 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (55686) - No such process 00:04:46.354 15:03:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.354 15:03:15 -- common/autotest_common.sh@862 -- # return 1 00:04:46.354 15:03:15 -- common/autotest_common.sh@653 -- # es=1 00:04:46.354 15:03:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:46.354 15:03:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:46.354 15:03:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:46.354 15:03:15 -- event/cpu_locks.sh@122 -- # locks_exist 55670 00:04:46.354 15:03:15 -- event/cpu_locks.sh@22 -- # lslocks -p 55670 00:04:46.354 15:03:15 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:46.922 15:03:15 -- event/cpu_locks.sh@124 -- # killprocess 55670 00:04:46.922 15:03:15 -- common/autotest_common.sh@936 -- # '[' -z 55670 ']' 00:04:46.922 15:03:15 -- common/autotest_common.sh@940 -- # kill -0 55670 00:04:46.922 15:03:15 -- common/autotest_common.sh@941 -- # uname 00:04:46.922 15:03:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:46.922 15:03:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55670 00:04:46.922 killing process with pid 55670 00:04:46.922 15:03:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:46.922 15:03:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:46.922 15:03:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55670' 00:04:46.922 15:03:15 -- common/autotest_common.sh@955 -- # kill 55670 00:04:46.922 15:03:15 -- common/autotest_common.sh@960 -- # wait 55670 00:04:47.181 00:04:47.181 real 0m2.511s 00:04:47.181 user 0m3.011s 00:04:47.181 sys 0m0.505s 00:04:47.181 15:03:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:47.181 15:03:16 -- common/autotest_common.sh@10 -- # set +x 00:04:47.181 ************************************ 00:04:47.181 END TEST locking_app_on_locked_coremask 00:04:47.181 ************************************ 00:04:47.181 15:03:16 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:47.181 15:03:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.181 15:03:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.181 15:03:16 -- common/autotest_common.sh@10 -- # set +x 00:04:47.181 ************************************ 00:04:47.181 START TEST locking_overlapped_coremask 00:04:47.181 ************************************ 00:04:47.181 15:03:16 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:04:47.181 15:03:16 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=55726 00:04:47.181 15:03:16 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:04:47.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.181 15:03:16 -- event/cpu_locks.sh@133 -- # waitforlisten 55726 /var/tmp/spdk.sock 00:04:47.181 15:03:16 -- common/autotest_common.sh@829 -- # '[' -z 55726 ']' 00:04:47.181 15:03:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.181 15:03:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.181 15:03:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.181 15:03:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.181 15:03:16 -- common/autotest_common.sh@10 -- # set +x 00:04:47.181 [2024-11-06 15:03:16.358559] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:47.181 [2024-11-06 15:03:16.358647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55726 ] 00:04:47.441 [2024-11-06 15:03:16.488907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:47.441 [2024-11-06 15:03:16.544278] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:47.441 [2024-11-06 15:03:16.544883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.441 [2024-11-06 15:03:16.545024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.441 [2024-11-06 15:03:16.545029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.377 15:03:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.377 15:03:17 -- common/autotest_common.sh@862 -- # return 0 00:04:48.377 15:03:17 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=55744 00:04:48.377 15:03:17 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:48.377 15:03:17 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 55744 /var/tmp/spdk2.sock 00:04:48.377 15:03:17 -- common/autotest_common.sh@650 -- # local es=0 00:04:48.377 15:03:17 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 55744 /var/tmp/spdk2.sock 00:04:48.377 15:03:17 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:48.377 15:03:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.377 15:03:17 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:48.377 15:03:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.377 15:03:17 -- common/autotest_common.sh@653 -- # waitforlisten 55744 /var/tmp/spdk2.sock 00:04:48.377 15:03:17 -- common/autotest_common.sh@829 -- # '[' -z 55744 ']' 00:04:48.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:48.377 15:03:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:48.377 15:03:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.377 15:03:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:04:48.377 15:03:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.377 15:03:17 -- common/autotest_common.sh@10 -- # set +x 00:04:48.377 [2024-11-06 15:03:17.419273] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:48.377 [2024-11-06 15:03:17.419592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55744 ] 00:04:48.377 [2024-11-06 15:03:17.565259] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55726 has claimed it. 00:04:48.377 [2024-11-06 15:03:17.565330] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:48.945 ERROR: process (pid: 55744) is no longer running 00:04:48.945 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (55744) - No such process 00:04:48.945 15:03:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.945 15:03:18 -- common/autotest_common.sh@862 -- # return 1 00:04:48.945 15:03:18 -- common/autotest_common.sh@653 -- # es=1 00:04:48.945 15:03:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:48.945 15:03:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:48.945 15:03:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:48.945 15:03:18 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:48.945 15:03:18 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:48.945 15:03:18 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:48.945 15:03:18 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:48.945 15:03:18 -- event/cpu_locks.sh@141 -- # killprocess 55726 00:04:48.945 15:03:18 -- common/autotest_common.sh@936 -- # '[' -z 55726 ']' 00:04:48.945 15:03:18 -- common/autotest_common.sh@940 -- # kill -0 55726 00:04:48.945 15:03:18 -- common/autotest_common.sh@941 -- # uname 00:04:48.945 15:03:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:48.945 15:03:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55726 00:04:48.945 15:03:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:48.945 15:03:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:48.945 15:03:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55726' 00:04:48.945 killing process with pid 55726 00:04:48.945 15:03:18 -- common/autotest_common.sh@955 -- # kill 55726 00:04:48.945 15:03:18 -- common/autotest_common.sh@960 -- # wait 55726 00:04:49.205 00:04:49.205 real 0m2.107s 00:04:49.205 user 0m6.084s 00:04:49.205 sys 0m0.329s 00:04:49.205 15:03:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.205 15:03:18 -- common/autotest_common.sh@10 -- # set +x 00:04:49.205 ************************************ 00:04:49.205 END TEST locking_overlapped_coremask 00:04:49.205 ************************************ 00:04:49.205 15:03:18 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:49.205 15:03:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.205 15:03:18 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.205 15:03:18 -- common/autotest_common.sh@10 -- # set +x 00:04:49.205 ************************************ 00:04:49.205 START TEST locking_overlapped_coremask_via_rpc 00:04:49.205 ************************************ 00:04:49.205 15:03:18 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:04:49.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.205 15:03:18 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=55784 00:04:49.205 15:03:18 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:49.205 15:03:18 -- event/cpu_locks.sh@149 -- # waitforlisten 55784 /var/tmp/spdk.sock 00:04:49.205 15:03:18 -- common/autotest_common.sh@829 -- # '[' -z 55784 ']' 00:04:49.205 15:03:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.205 15:03:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.205 15:03:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.205 15:03:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.205 15:03:18 -- common/autotest_common.sh@10 -- # set +x 00:04:49.464 [2024-11-06 15:03:18.532039] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:49.464 [2024-11-06 15:03:18.532138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55784 ] 00:04:49.464 [2024-11-06 15:03:18.667437] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:49.464 [2024-11-06 15:03:18.667480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:49.464 [2024-11-06 15:03:18.722860] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:49.464 [2024-11-06 15:03:18.723323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.464 [2024-11-06 15:03:18.723466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:49.464 [2024-11-06 15:03:18.723476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.401 15:03:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.401 15:03:19 -- common/autotest_common.sh@862 -- # return 0 00:04:50.401 15:03:19 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=55802 00:04:50.401 15:03:19 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:50.401 15:03:19 -- event/cpu_locks.sh@153 -- # waitforlisten 55802 /var/tmp/spdk2.sock 00:04:50.401 15:03:19 -- common/autotest_common.sh@829 -- # '[' -z 55802 ']' 00:04:50.401 15:03:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:50.401 15:03:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.401 15:03:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:50.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
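check_remaining_locks, seen a little earlier in the overlapped-coremask trace, asserts that a target running with -m 0x7 owns exactly one lock file per core; the comparison is a plain glob-versus-brace-expansion match. A sketch with the same paths as the trace:

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${locks_expected[*]}" ]]   # cores 0-2 each hold one file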
00:04:50.401 15:03:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.401 15:03:19 -- common/autotest_common.sh@10 -- # set +x 00:04:50.401 [2024-11-06 15:03:19.501646] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:50.402 [2024-11-06 15:03:19.501985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55802 ] 00:04:50.402 [2024-11-06 15:03:19.649347] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:50.402 [2024-11-06 15:03:19.649431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:50.660 [2024-11-06 15:03:19.766950] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:50.660 [2024-11-06 15:03:19.767176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:50.660 [2024-11-06 15:03:19.771719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:04:50.660 [2024-11-06 15:03:19.771720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:51.597 15:03:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.597 15:03:20 -- common/autotest_common.sh@862 -- # return 0 00:04:51.597 15:03:20 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:51.597 15:03:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.597 15:03:20 -- common/autotest_common.sh@10 -- # set +x 00:04:51.597 15:03:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.597 15:03:20 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:51.597 15:03:20 -- common/autotest_common.sh@650 -- # local es=0 00:04:51.597 15:03:20 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:51.597 15:03:20 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:51.597 15:03:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:51.597 15:03:20 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:51.598 15:03:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:51.598 15:03:20 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:51.598 15:03:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.598 15:03:20 -- common/autotest_common.sh@10 -- # set +x 00:04:51.598 [2024-11-06 15:03:20.532918] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55784 has claimed it. 00:04:51.598 request: 00:04:51.598 { 00:04:51.598 "method": "framework_enable_cpumask_locks", 00:04:51.598 "req_id": 1 00:04:51.598 } 00:04:51.598 Got JSON-RPC error response 00:04:51.598 response: 00:04:51.598 { 00:04:51.598 "code": -32603, 00:04:51.598 "message": "Failed to claim CPU core: 2" 00:04:51.598 } 00:04:51.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
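The request/response pair above is the JSON-RPC view of the same conflict: pid 55784 (mask 0x7) already holds cores 0-2, so asking the second target (mask 0x1c, overlapping at core 2) to enable its cpumask locks is refused. Issued by hand it would look roughly like this, assuming the usual rpc.py entry point behind rpc_cmd:

  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # expected failure: code -32603, 'Failed to claim CPU core: 2'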
00:04:51.598 15:03:20 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:51.598 15:03:20 -- common/autotest_common.sh@653 -- # es=1 00:04:51.598 15:03:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:51.598 15:03:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:51.598 15:03:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:51.598 15:03:20 -- event/cpu_locks.sh@158 -- # waitforlisten 55784 /var/tmp/spdk.sock 00:04:51.598 15:03:20 -- common/autotest_common.sh@829 -- # '[' -z 55784 ']' 00:04:51.598 15:03:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.598 15:03:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.598 15:03:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.598 15:03:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.598 15:03:20 -- common/autotest_common.sh@10 -- # set +x 00:04:51.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:51.598 15:03:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.598 15:03:20 -- common/autotest_common.sh@862 -- # return 0 00:04:51.598 15:03:20 -- event/cpu_locks.sh@159 -- # waitforlisten 55802 /var/tmp/spdk2.sock 00:04:51.598 15:03:20 -- common/autotest_common.sh@829 -- # '[' -z 55802 ']' 00:04:51.598 15:03:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:51.598 15:03:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.598 15:03:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:51.598 15:03:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.598 15:03:20 -- common/autotest_common.sh@10 -- # set +x 00:04:51.856 ************************************ 00:04:51.856 END TEST locking_overlapped_coremask_via_rpc 00:04:51.856 ************************************ 00:04:51.856 15:03:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.856 15:03:21 -- common/autotest_common.sh@862 -- # return 0 00:04:51.856 15:03:21 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:51.856 15:03:21 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:51.856 15:03:21 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:51.856 15:03:21 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:51.856 00:04:51.856 real 0m2.651s 00:04:51.856 user 0m1.400s 00:04:51.856 sys 0m0.190s 00:04:51.856 15:03:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.856 15:03:21 -- common/autotest_common.sh@10 -- # set +x 00:04:52.116 15:03:21 -- event/cpu_locks.sh@174 -- # cleanup 00:04:52.116 15:03:21 -- event/cpu_locks.sh@15 -- # [[ -z 55784 ]] 00:04:52.116 15:03:21 -- event/cpu_locks.sh@15 -- # killprocess 55784 00:04:52.116 15:03:21 -- common/autotest_common.sh@936 -- # '[' -z 55784 ']' 00:04:52.116 15:03:21 -- common/autotest_common.sh@940 -- # kill -0 55784 00:04:52.116 15:03:21 -- common/autotest_common.sh@941 -- # uname 00:04:52.116 15:03:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:52.116 15:03:21 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 55784 00:04:52.116 killing process with pid 55784 00:04:52.116 15:03:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:52.116 15:03:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:52.116 15:03:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55784' 00:04:52.116 15:03:21 -- common/autotest_common.sh@955 -- # kill 55784 00:04:52.116 15:03:21 -- common/autotest_common.sh@960 -- # wait 55784 00:04:52.375 15:03:21 -- event/cpu_locks.sh@16 -- # [[ -z 55802 ]] 00:04:52.375 15:03:21 -- event/cpu_locks.sh@16 -- # killprocess 55802 00:04:52.375 15:03:21 -- common/autotest_common.sh@936 -- # '[' -z 55802 ']' 00:04:52.375 15:03:21 -- common/autotest_common.sh@940 -- # kill -0 55802 00:04:52.375 15:03:21 -- common/autotest_common.sh@941 -- # uname 00:04:52.375 15:03:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:52.375 15:03:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55802 00:04:52.375 killing process with pid 55802 00:04:52.375 15:03:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:52.375 15:03:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:52.375 15:03:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55802' 00:04:52.375 15:03:21 -- common/autotest_common.sh@955 -- # kill 55802 00:04:52.375 15:03:21 -- common/autotest_common.sh@960 -- # wait 55802 00:04:52.634 15:03:21 -- event/cpu_locks.sh@18 -- # rm -f 00:04:52.634 Process with pid 55784 is not found 00:04:52.634 Process with pid 55802 is not found 00:04:52.634 15:03:21 -- event/cpu_locks.sh@1 -- # cleanup 00:04:52.634 15:03:21 -- event/cpu_locks.sh@15 -- # [[ -z 55784 ]] 00:04:52.634 15:03:21 -- event/cpu_locks.sh@15 -- # killprocess 55784 00:04:52.634 15:03:21 -- common/autotest_common.sh@936 -- # '[' -z 55784 ']' 00:04:52.634 15:03:21 -- common/autotest_common.sh@940 -- # kill -0 55784 00:04:52.634 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (55784) - No such process 00:04:52.634 15:03:21 -- common/autotest_common.sh@963 -- # echo 'Process with pid 55784 is not found' 00:04:52.634 15:03:21 -- event/cpu_locks.sh@16 -- # [[ -z 55802 ]] 00:04:52.634 15:03:21 -- event/cpu_locks.sh@16 -- # killprocess 55802 00:04:52.634 15:03:21 -- common/autotest_common.sh@936 -- # '[' -z 55802 ']' 00:04:52.634 15:03:21 -- common/autotest_common.sh@940 -- # kill -0 55802 00:04:52.634 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (55802) - No such process 00:04:52.634 15:03:21 -- common/autotest_common.sh@963 -- # echo 'Process with pid 55802 is not found' 00:04:52.634 15:03:21 -- event/cpu_locks.sh@18 -- # rm -f 00:04:52.634 00:04:52.634 real 0m19.114s 00:04:52.634 user 0m35.372s 00:04:52.634 sys 0m4.144s 00:04:52.634 15:03:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.634 ************************************ 00:04:52.634 END TEST cpu_locks 00:04:52.634 ************************************ 00:04:52.634 15:03:21 -- common/autotest_common.sh@10 -- # set +x 00:04:52.634 00:04:52.634 real 0m45.942s 00:04:52.634 user 1m29.887s 00:04:52.634 sys 0m7.301s 00:04:52.634 15:03:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.634 15:03:21 -- common/autotest_common.sh@10 -- # set +x 00:04:52.634 ************************************ 00:04:52.634 END TEST event 00:04:52.634 ************************************ 00:04:52.634 15:03:21 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:52.634 15:03:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.634 15:03:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.634 15:03:21 -- common/autotest_common.sh@10 -- # set +x 00:04:52.634 ************************************ 00:04:52.634 START TEST thread 00:04:52.635 ************************************ 00:04:52.635 15:03:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:52.894 * Looking for test storage... 00:04:52.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:04:52.894 15:03:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:52.894 15:03:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:52.894 15:03:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:52.894 15:03:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:52.894 15:03:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:52.894 15:03:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:52.894 15:03:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:52.894 15:03:22 -- scripts/common.sh@335 -- # IFS=.-: 00:04:52.894 15:03:22 -- scripts/common.sh@335 -- # read -ra ver1 00:04:52.894 15:03:22 -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.894 15:03:22 -- scripts/common.sh@336 -- # read -ra ver2 00:04:52.894 15:03:22 -- scripts/common.sh@337 -- # local 'op=<' 00:04:52.894 15:03:22 -- scripts/common.sh@339 -- # ver1_l=2 00:04:52.894 15:03:22 -- scripts/common.sh@340 -- # ver2_l=1 00:04:52.894 15:03:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:52.894 15:03:22 -- scripts/common.sh@343 -- # case "$op" in 00:04:52.894 15:03:22 -- scripts/common.sh@344 -- # : 1 00:04:52.894 15:03:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:52.894 15:03:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.894 15:03:22 -- scripts/common.sh@364 -- # decimal 1 00:04:52.894 15:03:22 -- scripts/common.sh@352 -- # local d=1 00:04:52.894 15:03:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.894 15:03:22 -- scripts/common.sh@354 -- # echo 1 00:04:52.894 15:03:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:52.894 15:03:22 -- scripts/common.sh@365 -- # decimal 2 00:04:52.894 15:03:22 -- scripts/common.sh@352 -- # local d=2 00:04:52.894 15:03:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.894 15:03:22 -- scripts/common.sh@354 -- # echo 2 00:04:52.894 15:03:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:52.894 15:03:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:52.894 15:03:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:52.894 15:03:22 -- scripts/common.sh@367 -- # return 0 00:04:52.894 15:03:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.894 15:03:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:52.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.894 --rc genhtml_branch_coverage=1 00:04:52.894 --rc genhtml_function_coverage=1 00:04:52.894 --rc genhtml_legend=1 00:04:52.894 --rc geninfo_all_blocks=1 00:04:52.894 --rc geninfo_unexecuted_blocks=1 00:04:52.894 00:04:52.894 ' 00:04:52.894 15:03:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:52.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.894 --rc genhtml_branch_coverage=1 00:04:52.894 --rc genhtml_function_coverage=1 00:04:52.894 --rc genhtml_legend=1 00:04:52.894 --rc geninfo_all_blocks=1 00:04:52.894 --rc geninfo_unexecuted_blocks=1 00:04:52.894 00:04:52.894 ' 00:04:52.894 15:03:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:52.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.894 --rc genhtml_branch_coverage=1 00:04:52.894 --rc genhtml_function_coverage=1 00:04:52.894 --rc genhtml_legend=1 00:04:52.894 --rc geninfo_all_blocks=1 00:04:52.894 --rc geninfo_unexecuted_blocks=1 00:04:52.894 00:04:52.894 ' 00:04:52.894 15:03:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:52.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.894 --rc genhtml_branch_coverage=1 00:04:52.894 --rc genhtml_function_coverage=1 00:04:52.894 --rc genhtml_legend=1 00:04:52.894 --rc geninfo_all_blocks=1 00:04:52.894 --rc geninfo_unexecuted_blocks=1 00:04:52.894 00:04:52.894 ' 00:04:52.894 15:03:22 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:52.894 15:03:22 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:04:52.894 15:03:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.894 15:03:22 -- common/autotest_common.sh@10 -- # set +x 00:04:52.894 ************************************ 00:04:52.894 START TEST thread_poller_perf 00:04:52.894 ************************************ 00:04:52.894 15:03:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:52.894 [2024-11-06 15:03:22.109233] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:52.894 [2024-11-06 15:03:22.109353] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55937 ] 00:04:53.153 [2024-11-06 15:03:22.248525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.153 [2024-11-06 15:03:22.303237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.153 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:54.528 [2024-11-06T15:03:23.803Z] ====================================== 00:04:54.528 [2024-11-06T15:03:23.803Z] busy:2209371348 (cyc) 00:04:54.528 [2024-11-06T15:03:23.803Z] total_run_count: 318000 00:04:54.528 [2024-11-06T15:03:23.803Z] tsc_hz: 2200000000 (cyc) 00:04:54.528 [2024-11-06T15:03:23.803Z] ====================================== 00:04:54.528 [2024-11-06T15:03:23.803Z] poller_cost: 6947 (cyc), 3157 (nsec) 00:04:54.528 00:04:54.528 real 0m1.316s 00:04:54.528 user 0m1.169s 00:04:54.528 sys 0m0.040s 00:04:54.528 15:03:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.528 ************************************ 00:04:54.528 END TEST thread_poller_perf 00:04:54.528 ************************************ 00:04:54.528 15:03:23 -- common/autotest_common.sh@10 -- # set +x 00:04:54.529 15:03:23 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:54.529 15:03:23 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:04:54.529 15:03:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.529 15:03:23 -- common/autotest_common.sh@10 -- # set +x 00:04:54.529 ************************************ 00:04:54.529 START TEST thread_poller_perf 00:04:54.529 ************************************ 00:04:54.529 15:03:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:54.529 [2024-11-06 15:03:23.484562] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:54.529 [2024-11-06 15:03:23.484713] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55967 ] 00:04:54.529 [2024-11-06 15:03:23.621592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.529 [2024-11-06 15:03:23.679502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.529 Running 1000 pollers for 1 seconds with 0 microseconds period. 
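For the 1-microsecond-period poller run reported above (pid 55937), the printed poller_cost is simply busy cycles divided by total_run_count, then converted to nanoseconds with the 2200000000-cycle/s TSC rate. A quick recomputation, assuming plain integer truncation, reproduces the printed figures:

  awk 'BEGIN { cyc = int(2209371348 / 318000); printf "%d cyc, %d nsec\n", cyc, int(cyc / 2200000000 * 1e9) }'
  # -> 6947 cyc, 3157 nsec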
00:04:55.906 [2024-11-06T15:03:25.181Z] ====================================== 00:04:55.906 [2024-11-06T15:03:25.181Z] busy:2202697357 (cyc) 00:04:55.906 [2024-11-06T15:03:25.181Z] total_run_count: 4401000 00:04:55.906 [2024-11-06T15:03:25.181Z] tsc_hz: 2200000000 (cyc) 00:04:55.906 [2024-11-06T15:03:25.182Z] ====================================== 00:04:55.907 [2024-11-06T15:03:25.182Z] poller_cost: 500 (cyc), 227 (nsec) 00:04:55.907 00:04:55.907 real 0m1.314s 00:04:55.907 user 0m1.158s 00:04:55.907 sys 0m0.049s 00:04:55.907 15:03:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:55.907 15:03:24 -- common/autotest_common.sh@10 -- # set +x 00:04:55.907 ************************************ 00:04:55.907 END TEST thread_poller_perf 00:04:55.907 ************************************ 00:04:55.907 15:03:24 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:55.907 00:04:55.907 real 0m2.923s 00:04:55.907 user 0m2.490s 00:04:55.907 sys 0m0.217s 00:04:55.907 15:03:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:55.907 15:03:24 -- common/autotest_common.sh@10 -- # set +x 00:04:55.907 ************************************ 00:04:55.907 END TEST thread 00:04:55.907 ************************************ 00:04:55.907 15:03:24 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:55.907 15:03:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.907 15:03:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.907 15:03:24 -- common/autotest_common.sh@10 -- # set +x 00:04:55.907 ************************************ 00:04:55.907 START TEST accel 00:04:55.907 ************************************ 00:04:55.907 15:03:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:55.907 * Looking for test storage... 00:04:55.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:04:55.907 15:03:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:55.907 15:03:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:55.907 15:03:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:55.907 15:03:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:55.907 15:03:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:55.907 15:03:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:55.907 15:03:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:55.907 15:03:25 -- scripts/common.sh@335 -- # IFS=.-: 00:04:55.907 15:03:25 -- scripts/common.sh@335 -- # read -ra ver1 00:04:55.907 15:03:25 -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.907 15:03:25 -- scripts/common.sh@336 -- # read -ra ver2 00:04:55.907 15:03:25 -- scripts/common.sh@337 -- # local 'op=<' 00:04:55.907 15:03:25 -- scripts/common.sh@339 -- # ver1_l=2 00:04:55.907 15:03:25 -- scripts/common.sh@340 -- # ver2_l=1 00:04:55.907 15:03:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:55.907 15:03:25 -- scripts/common.sh@343 -- # case "$op" in 00:04:55.907 15:03:25 -- scripts/common.sh@344 -- # : 1 00:04:55.907 15:03:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:55.907 15:03:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.907 15:03:25 -- scripts/common.sh@364 -- # decimal 1 00:04:55.907 15:03:25 -- scripts/common.sh@352 -- # local d=1 00:04:55.907 15:03:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.907 15:03:25 -- scripts/common.sh@354 -- # echo 1 00:04:55.907 15:03:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:55.907 15:03:25 -- scripts/common.sh@365 -- # decimal 2 00:04:55.907 15:03:25 -- scripts/common.sh@352 -- # local d=2 00:04:55.907 15:03:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.907 15:03:25 -- scripts/common.sh@354 -- # echo 2 00:04:55.907 15:03:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:55.907 15:03:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:55.907 15:03:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:55.907 15:03:25 -- scripts/common.sh@367 -- # return 0 00:04:55.907 15:03:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.907 15:03:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:55.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.907 --rc genhtml_branch_coverage=1 00:04:55.907 --rc genhtml_function_coverage=1 00:04:55.907 --rc genhtml_legend=1 00:04:55.907 --rc geninfo_all_blocks=1 00:04:55.907 --rc geninfo_unexecuted_blocks=1 00:04:55.907 00:04:55.907 ' 00:04:55.907 15:03:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:55.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.907 --rc genhtml_branch_coverage=1 00:04:55.907 --rc genhtml_function_coverage=1 00:04:55.907 --rc genhtml_legend=1 00:04:55.907 --rc geninfo_all_blocks=1 00:04:55.907 --rc geninfo_unexecuted_blocks=1 00:04:55.907 00:04:55.907 ' 00:04:55.907 15:03:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:55.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.907 --rc genhtml_branch_coverage=1 00:04:55.907 --rc genhtml_function_coverage=1 00:04:55.907 --rc genhtml_legend=1 00:04:55.907 --rc geninfo_all_blocks=1 00:04:55.907 --rc geninfo_unexecuted_blocks=1 00:04:55.907 00:04:55.907 ' 00:04:55.907 15:03:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:55.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.907 --rc genhtml_branch_coverage=1 00:04:55.907 --rc genhtml_function_coverage=1 00:04:55.907 --rc genhtml_legend=1 00:04:55.907 --rc geninfo_all_blocks=1 00:04:55.907 --rc geninfo_unexecuted_blocks=1 00:04:55.907 00:04:55.907 ' 00:04:55.907 15:03:25 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:04:55.907 15:03:25 -- accel/accel.sh@74 -- # get_expected_opcs 00:04:55.907 15:03:25 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:55.907 15:03:25 -- accel/accel.sh@59 -- # spdk_tgt_pid=56054 00:04:55.907 15:03:25 -- accel/accel.sh@60 -- # waitforlisten 56054 00:04:55.907 15:03:25 -- common/autotest_common.sh@829 -- # '[' -z 56054 ']' 00:04:55.907 15:03:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.907 15:03:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.907 15:03:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:55.907 15:03:25 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:04:55.907 15:03:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.907 15:03:25 -- common/autotest_common.sh@10 -- # set +x 00:04:55.907 15:03:25 -- accel/accel.sh@58 -- # build_accel_config 00:04:55.907 15:03:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:55.907 15:03:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:55.907 15:03:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:55.907 15:03:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:55.907 15:03:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:55.907 15:03:25 -- accel/accel.sh@41 -- # local IFS=, 00:04:55.907 15:03:25 -- accel/accel.sh@42 -- # jq -r . 00:04:55.907 [2024-11-06 15:03:25.117930] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:55.907 [2024-11-06 15:03:25.118042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56054 ] 00:04:56.167 [2024-11-06 15:03:25.256555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.167 [2024-11-06 15:03:25.311597] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:56.167 [2024-11-06 15:03:25.311851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.104 15:03:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.104 15:03:26 -- common/autotest_common.sh@862 -- # return 0 00:04:57.104 15:03:26 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:57.104 15:03:26 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:04:57.104 15:03:26 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:04:57.104 15:03:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.104 15:03:26 -- common/autotest_common.sh@10 -- # set +x 00:04:57.104 15:03:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.104 15:03:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # IFS== 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # read -r opc module 00:04:57.104 15:03:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:57.104 15:03:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # IFS== 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # read -r opc module 00:04:57.104 15:03:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:57.104 15:03:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # IFS== 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # read -r opc module 00:04:57.104 15:03:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:57.104 15:03:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # IFS== 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # read -r opc module 00:04:57.104 15:03:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:57.104 15:03:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # IFS== 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # read -r opc module 00:04:57.104 15:03:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:57.104 15:03:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # IFS== 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # read -r opc module 00:04:57.104 15:03:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:57.104 15:03:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # IFS== 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # read -r opc module 00:04:57.104 15:03:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:57.104 15:03:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # IFS== 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # read -r opc module 00:04:57.104 15:03:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:57.104 15:03:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # IFS== 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # read -r opc module 00:04:57.104 15:03:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:57.104 15:03:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # IFS== 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # read -r opc module 00:04:57.104 15:03:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:57.104 15:03:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # IFS== 00:04:57.104 15:03:26 -- accel/accel.sh@64 -- # read -r opc module 00:04:57.105 15:03:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:57.105 15:03:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.105 15:03:26 -- accel/accel.sh@64 -- # 
IFS== 00:04:57.105 15:03:26 -- accel/accel.sh@64 -- # read -r opc module 00:04:57.105 15:03:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:57.105 15:03:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.105 15:03:26 -- accel/accel.sh@64 -- # IFS== 00:04:57.105 15:03:26 -- accel/accel.sh@64 -- # read -r opc module 00:04:57.105 15:03:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:57.105 15:03:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.105 15:03:26 -- accel/accel.sh@64 -- # IFS== 00:04:57.105 15:03:26 -- accel/accel.sh@64 -- # read -r opc module 00:04:57.105 15:03:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:57.105 15:03:26 -- accel/accel.sh@67 -- # killprocess 56054 00:04:57.105 15:03:26 -- common/autotest_common.sh@936 -- # '[' -z 56054 ']' 00:04:57.105 15:03:26 -- common/autotest_common.sh@940 -- # kill -0 56054 00:04:57.105 15:03:26 -- common/autotest_common.sh@941 -- # uname 00:04:57.105 15:03:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:57.105 15:03:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56054 00:04:57.105 killing process with pid 56054 00:04:57.105 15:03:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:57.105 15:03:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:57.105 15:03:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56054' 00:04:57.105 15:03:26 -- common/autotest_common.sh@955 -- # kill 56054 00:04:57.105 15:03:26 -- common/autotest_common.sh@960 -- # wait 56054 00:04:57.365 15:03:26 -- accel/accel.sh@68 -- # trap - ERR 00:04:57.365 15:03:26 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:04:57.365 15:03:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:04:57.365 15:03:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.365 15:03:26 -- common/autotest_common.sh@10 -- # set +x 00:04:57.365 15:03:26 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:04:57.365 15:03:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:04:57.365 15:03:26 -- accel/accel.sh@12 -- # build_accel_config 00:04:57.365 15:03:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:57.365 15:03:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.365 15:03:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.365 15:03:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:57.365 15:03:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:57.365 15:03:26 -- accel/accel.sh@41 -- # local IFS=, 00:04:57.365 15:03:26 -- accel/accel.sh@42 -- # jq -r . 
00:04:57.365 15:03:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.365 15:03:26 -- common/autotest_common.sh@10 -- # set +x 00:04:57.365 15:03:26 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:57.365 15:03:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:57.365 15:03:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.365 15:03:26 -- common/autotest_common.sh@10 -- # set +x 00:04:57.365 ************************************ 00:04:57.365 START TEST accel_missing_filename 00:04:57.365 ************************************ 00:04:57.365 15:03:26 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:04:57.365 15:03:26 -- common/autotest_common.sh@650 -- # local es=0 00:04:57.365 15:03:26 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:57.365 15:03:26 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:04:57.365 15:03:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.365 15:03:26 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:04:57.365 15:03:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.365 15:03:26 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:04:57.365 15:03:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:04:57.365 15:03:26 -- accel/accel.sh@12 -- # build_accel_config 00:04:57.365 15:03:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:57.365 15:03:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.365 15:03:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.365 15:03:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:57.365 15:03:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:57.365 15:03:26 -- accel/accel.sh@41 -- # local IFS=, 00:04:57.365 15:03:26 -- accel/accel.sh@42 -- # jq -r . 00:04:57.365 [2024-11-06 15:03:26.619845] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:57.365 [2024-11-06 15:03:26.619946] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56100 ] 00:04:57.625 [2024-11-06 15:03:26.750508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.625 [2024-11-06 15:03:26.809968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.625 [2024-11-06 15:03:26.843016] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:57.625 [2024-11-06 15:03:26.884515] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:04:57.884 A filename is required. 
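The "A filename is required." failure above is the expected outcome of the accel_missing_filename case: the compress workload needs an input file (the -l option in the usage text later in this log), so accel_perf exits non-zero and the NOT wrapper counts that as a pass. A minimal way to reproduce it by hand is sketched below; it assumes an SPDK tree built at the path used in the trace, and it omits the -c /dev/fd/62 JSON config the harness pipes in, which should not affect the missing-file check.
# run the compress workload without -l <input file>; accel_perf is expected to
# print "A filename is required." and exit with a non-zero status
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
echo $?   # the non-zero exit status is what the test's NOT wrapper asserts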
00:04:57.884 15:03:26 -- common/autotest_common.sh@653 -- # es=234 00:04:57.884 15:03:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:57.884 15:03:26 -- common/autotest_common.sh@662 -- # es=106 00:04:57.884 15:03:26 -- common/autotest_common.sh@663 -- # case "$es" in 00:04:57.884 15:03:26 -- common/autotest_common.sh@670 -- # es=1 00:04:57.884 15:03:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:57.884 ************************************ 00:04:57.884 END TEST accel_missing_filename 00:04:57.884 ************************************ 00:04:57.884 00:04:57.884 real 0m0.389s 00:04:57.884 user 0m0.254s 00:04:57.884 sys 0m0.073s 00:04:57.884 15:03:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.884 15:03:26 -- common/autotest_common.sh@10 -- # set +x 00:04:57.884 15:03:27 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:57.884 15:03:27 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:04:57.884 15:03:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.884 15:03:27 -- common/autotest_common.sh@10 -- # set +x 00:04:57.884 ************************************ 00:04:57.884 START TEST accel_compress_verify 00:04:57.884 ************************************ 00:04:57.884 15:03:27 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:57.884 15:03:27 -- common/autotest_common.sh@650 -- # local es=0 00:04:57.884 15:03:27 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:57.884 15:03:27 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:04:57.884 15:03:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.884 15:03:27 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:04:57.884 15:03:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.884 15:03:27 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:57.884 15:03:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:57.884 15:03:27 -- accel/accel.sh@12 -- # build_accel_config 00:04:57.884 15:03:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:57.884 15:03:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.884 15:03:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.884 15:03:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:57.884 15:03:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:57.884 15:03:27 -- accel/accel.sh@41 -- # local IFS=, 00:04:57.884 15:03:27 -- accel/accel.sh@42 -- # jq -r . 00:04:57.884 [2024-11-06 15:03:27.056981] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:57.884 [2024-11-06 15:03:27.057077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56129 ] 00:04:58.143 [2024-11-06 15:03:27.193956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.143 [2024-11-06 15:03:27.251919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.143 [2024-11-06 15:03:27.284032] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:58.143 [2024-11-06 15:03:27.323906] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:04:58.143 00:04:58.143 Compression does not support the verify option, aborting. 00:04:58.143 15:03:27 -- common/autotest_common.sh@653 -- # es=161 00:04:58.143 15:03:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:58.143 15:03:27 -- common/autotest_common.sh@662 -- # es=33 00:04:58.143 15:03:27 -- common/autotest_common.sh@663 -- # case "$es" in 00:04:58.143 15:03:27 -- common/autotest_common.sh@670 -- # es=1 00:04:58.143 15:03:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:58.143 00:04:58.143 real 0m0.385s 00:04:58.143 user 0m0.246s 00:04:58.143 sys 0m0.082s 00:04:58.143 15:03:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:58.143 ************************************ 00:04:58.143 END TEST accel_compress_verify 00:04:58.143 ************************************ 00:04:58.143 15:03:27 -- common/autotest_common.sh@10 -- # set +x 00:04:58.403 15:03:27 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:58.403 15:03:27 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:58.403 15:03:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.403 15:03:27 -- common/autotest_common.sh@10 -- # set +x 00:04:58.403 ************************************ 00:04:58.403 START TEST accel_wrong_workload 00:04:58.403 ************************************ 00:04:58.403 15:03:27 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:04:58.403 15:03:27 -- common/autotest_common.sh@650 -- # local es=0 00:04:58.403 15:03:27 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:04:58.403 15:03:27 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:04:58.403 15:03:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.403 15:03:27 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:04:58.403 15:03:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.403 15:03:27 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:04:58.403 15:03:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:04:58.403 15:03:27 -- accel/accel.sh@12 -- # build_accel_config 00:04:58.403 15:03:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:58.403 15:03:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:58.403 15:03:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:58.403 15:03:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:58.403 15:03:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:58.403 15:03:27 -- accel/accel.sh@41 -- # local IFS=, 00:04:58.403 15:03:27 -- accel/accel.sh@42 -- # jq -r . 
00:04:58.403 Unsupported workload type: foobar 00:04:58.403 [2024-11-06 15:03:27.485606] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:58.403 accel_perf options: 00:04:58.403 [-h help message] 00:04:58.403 [-q queue depth per core] 00:04:58.403 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:58.403 [-T number of threads per core 00:04:58.403 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:58.403 [-t time in seconds] 00:04:58.403 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:58.403 [ dif_verify, , dif_generate, dif_generate_copy 00:04:58.403 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:58.403 [-l for compress/decompress workloads, name of uncompressed input file 00:04:58.403 [-S for crc32c workload, use this seed value (default 0) 00:04:58.403 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:58.403 [-f for fill workload, use this BYTE value (default 255) 00:04:58.403 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:58.403 [-y verify result if this switch is on] 00:04:58.403 [-a tasks to allocate per core (default: same value as -q)] 00:04:58.403 Can be used to spread operations across a wider range of memory. 00:04:58.403 15:03:27 -- common/autotest_common.sh@653 -- # es=1 00:04:58.403 15:03:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:58.403 15:03:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:58.403 15:03:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:58.403 00:04:58.403 real 0m0.026s 00:04:58.403 user 0m0.013s 00:04:58.403 sys 0m0.013s 00:04:58.403 15:03:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:58.403 ************************************ 00:04:58.403 END TEST accel_wrong_workload 00:04:58.403 ************************************ 00:04:58.403 15:03:27 -- common/autotest_common.sh@10 -- # set +x 00:04:58.403 15:03:27 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:58.403 15:03:27 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:04:58.403 15:03:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.403 15:03:27 -- common/autotest_common.sh@10 -- # set +x 00:04:58.403 ************************************ 00:04:58.403 START TEST accel_negative_buffers 00:04:58.403 ************************************ 00:04:58.403 15:03:27 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:58.403 15:03:27 -- common/autotest_common.sh@650 -- # local es=0 00:04:58.403 15:03:27 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:58.403 15:03:27 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:04:58.403 15:03:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.403 15:03:27 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:04:58.403 15:03:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.403 15:03:27 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:04:58.403 15:03:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:04:58.403 15:03:27 -- accel/accel.sh@12 -- # 
build_accel_config 00:04:58.403 15:03:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:58.403 15:03:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:58.403 15:03:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:58.403 15:03:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:58.403 15:03:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:58.403 15:03:27 -- accel/accel.sh@41 -- # local IFS=, 00:04:58.403 15:03:27 -- accel/accel.sh@42 -- # jq -r . 00:04:58.403 -x option must be non-negative. 00:04:58.403 [2024-11-06 15:03:27.561575] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:58.403 accel_perf options: 00:04:58.403 [-h help message] 00:04:58.403 [-q queue depth per core] 00:04:58.403 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:58.403 [-T number of threads per core 00:04:58.403 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:58.403 [-t time in seconds] 00:04:58.403 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:58.403 [ dif_verify, , dif_generate, dif_generate_copy 00:04:58.403 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:58.403 [-l for compress/decompress workloads, name of uncompressed input file 00:04:58.403 [-S for crc32c workload, use this seed value (default 0) 00:04:58.403 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:58.403 [-f for fill workload, use this BYTE value (default 255) 00:04:58.403 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:58.403 [-y verify result if this switch is on] 00:04:58.403 [-a tasks to allocate per core (default: same value as -q)] 00:04:58.403 Can be used to spread operations across a wider range of memory. 
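The accel_perf usage text appears twice in this part of the log because both negative cases exercise argument validation before the app starts: accel_wrong_workload passes an unknown -w value and accel_negative_buffers passes -x -1, and each makes spdk_app_parse_args fail. A hand-run equivalent is sketched below, assuming the same build path as in the trace; only a non-zero exit is asserted by the NOT wrapper, the exact status value is not.
# unknown workload type: expect "Unsupported workload type: foobar" plus the usage text
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar
# xor with a negative source-buffer count: expect "-x option must be non-negative."
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x -1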
00:04:58.403 ************************************ 00:04:58.403 END TEST accel_negative_buffers 00:04:58.403 ************************************ 00:04:58.403 15:03:27 -- common/autotest_common.sh@653 -- # es=1 00:04:58.403 15:03:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:58.403 15:03:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:58.403 15:03:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:58.403 00:04:58.403 real 0m0.031s 00:04:58.403 user 0m0.021s 00:04:58.403 sys 0m0.009s 00:04:58.403 15:03:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:58.403 15:03:27 -- common/autotest_common.sh@10 -- # set +x 00:04:58.403 15:03:27 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:58.403 15:03:27 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:04:58.403 15:03:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.403 15:03:27 -- common/autotest_common.sh@10 -- # set +x 00:04:58.403 ************************************ 00:04:58.403 START TEST accel_crc32c 00:04:58.403 ************************************ 00:04:58.403 15:03:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:58.403 15:03:27 -- accel/accel.sh@16 -- # local accel_opc 00:04:58.403 15:03:27 -- accel/accel.sh@17 -- # local accel_module 00:04:58.403 15:03:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:58.403 15:03:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:04:58.403 15:03:27 -- accel/accel.sh@12 -- # build_accel_config 00:04:58.403 15:03:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:58.403 15:03:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:58.403 15:03:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:58.403 15:03:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:58.403 15:03:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:58.403 15:03:27 -- accel/accel.sh@41 -- # local IFS=, 00:04:58.403 15:03:27 -- accel/accel.sh@42 -- # jq -r . 00:04:58.403 [2024-11-06 15:03:27.636456] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:58.404 [2024-11-06 15:03:27.636704] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56183 ] 00:04:58.663 [2024-11-06 15:03:27.773310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.663 [2024-11-06 15:03:27.835858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.041 15:03:29 -- accel/accel.sh@18 -- # out=' 00:05:00.041 SPDK Configuration: 00:05:00.041 Core mask: 0x1 00:05:00.041 00:05:00.041 Accel Perf Configuration: 00:05:00.041 Workload Type: crc32c 00:05:00.041 CRC-32C seed: 32 00:05:00.041 Transfer size: 4096 bytes 00:05:00.041 Vector count 1 00:05:00.041 Module: software 00:05:00.041 Queue depth: 32 00:05:00.041 Allocate depth: 32 00:05:00.041 # threads/core: 1 00:05:00.041 Run time: 1 seconds 00:05:00.041 Verify: Yes 00:05:00.041 00:05:00.041 Running for 1 seconds... 
00:05:00.041 00:05:00.041 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:00.041 ------------------------------------------------------------------------------------ 00:05:00.041 0,0 481120/s 1879 MiB/s 0 0 00:05:00.041 ==================================================================================== 00:05:00.041 Total 481120/s 1879 MiB/s 0 0' 00:05:00.041 15:03:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:00.041 15:03:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:00.041 15:03:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:00.041 15:03:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:00.041 15:03:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:00.041 15:03:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:00.041 15:03:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:00.041 15:03:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:00.041 15:03:29 -- accel/accel.sh@41 -- # local IFS=, 00:05:00.041 15:03:29 -- accel/accel.sh@42 -- # jq -r . 00:05:00.041 [2024-11-06 15:03:29.033380] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:00.041 [2024-11-06 15:03:29.033474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56203 ] 00:05:00.041 [2024-11-06 15:03:29.167755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.041 [2024-11-06 15:03:29.222431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.041 15:03:29 -- accel/accel.sh@21 -- # val= 00:05:00.041 15:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:00.041 15:03:29 -- accel/accel.sh@21 -- # val= 00:05:00.041 15:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:00.041 15:03:29 -- accel/accel.sh@21 -- # val=0x1 00:05:00.041 15:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:00.041 15:03:29 -- accel/accel.sh@21 -- # val= 00:05:00.041 15:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:00.041 15:03:29 -- accel/accel.sh@21 -- # val= 00:05:00.041 15:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:00.041 15:03:29 -- accel/accel.sh@21 -- # val=crc32c 00:05:00.041 15:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.041 15:03:29 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:00.041 15:03:29 -- accel/accel.sh@21 -- # val=32 00:05:00.041 15:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:00.041 15:03:29 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:00.041 15:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:00.041 15:03:29 -- accel/accel.sh@21 -- # val= 00:05:00.041 15:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:00.041 15:03:29 -- accel/accel.sh@21 -- # val=software 00:05:00.041 15:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.041 15:03:29 -- accel/accel.sh@23 -- # accel_module=software 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:00.041 15:03:29 -- accel/accel.sh@21 -- # val=32 00:05:00.041 15:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:00.041 15:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:00.041 15:03:29 -- accel/accel.sh@21 -- # val=32 00:05:00.041 15:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.042 15:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:00.042 15:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:00.042 15:03:29 -- accel/accel.sh@21 -- # val=1 00:05:00.042 15:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.042 15:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:00.042 15:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:00.042 15:03:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:00.042 15:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.042 15:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:00.042 15:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:00.042 15:03:29 -- accel/accel.sh@21 -- # val=Yes 00:05:00.042 15:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.042 15:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:00.042 15:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:00.042 15:03:29 -- accel/accel.sh@21 -- # val= 00:05:00.042 15:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.042 15:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:00.042 15:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:00.042 15:03:29 -- accel/accel.sh@21 -- # val= 00:05:00.042 15:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.042 15:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:00.042 15:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:01.418 15:03:30 -- accel/accel.sh@21 -- # val= 00:05:01.418 15:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.418 15:03:30 -- accel/accel.sh@20 -- # IFS=: 00:05:01.418 15:03:30 -- accel/accel.sh@20 -- # read -r var val 00:05:01.418 15:03:30 -- accel/accel.sh@21 -- # val= 00:05:01.418 15:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.418 15:03:30 -- accel/accel.sh@20 -- # IFS=: 00:05:01.418 15:03:30 -- accel/accel.sh@20 -- # read -r var val 00:05:01.418 15:03:30 -- accel/accel.sh@21 -- # val= 00:05:01.418 15:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.418 15:03:30 -- accel/accel.sh@20 -- # IFS=: 00:05:01.418 15:03:30 -- accel/accel.sh@20 -- # read -r var val 00:05:01.418 15:03:30 -- accel/accel.sh@21 -- # val= 00:05:01.418 15:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.418 15:03:30 -- accel/accel.sh@20 -- # IFS=: 00:05:01.418 15:03:30 -- accel/accel.sh@20 -- # read -r var val 00:05:01.418 15:03:30 -- accel/accel.sh@21 -- # val= 00:05:01.418 15:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.418 15:03:30 -- accel/accel.sh@20 -- # IFS=: 00:05:01.418 15:03:30 -- 
accel/accel.sh@20 -- # read -r var val 00:05:01.418 15:03:30 -- accel/accel.sh@21 -- # val= 00:05:01.418 15:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.418 15:03:30 -- accel/accel.sh@20 -- # IFS=: 00:05:01.418 15:03:30 -- accel/accel.sh@20 -- # read -r var val 00:05:01.418 15:03:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:01.418 15:03:30 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:01.418 15:03:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:01.418 00:05:01.418 real 0m2.777s 00:05:01.418 user 0m2.423s 00:05:01.418 sys 0m0.152s 00:05:01.418 15:03:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:01.418 15:03:30 -- common/autotest_common.sh@10 -- # set +x 00:05:01.418 ************************************ 00:05:01.418 END TEST accel_crc32c 00:05:01.418 ************************************ 00:05:01.418 15:03:30 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:01.418 15:03:30 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:01.418 15:03:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.418 15:03:30 -- common/autotest_common.sh@10 -- # set +x 00:05:01.418 ************************************ 00:05:01.418 START TEST accel_crc32c_C2 00:05:01.418 ************************************ 00:05:01.418 15:03:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:01.418 15:03:30 -- accel/accel.sh@16 -- # local accel_opc 00:05:01.418 15:03:30 -- accel/accel.sh@17 -- # local accel_module 00:05:01.418 15:03:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:01.418 15:03:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:01.419 15:03:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:01.419 15:03:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:01.419 15:03:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:01.419 15:03:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:01.419 15:03:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:01.419 15:03:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:01.419 15:03:30 -- accel/accel.sh@41 -- # local IFS=, 00:05:01.419 15:03:30 -- accel/accel.sh@42 -- # jq -r . 00:05:01.419 [2024-11-06 15:03:30.473124] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:01.419 [2024-11-06 15:03:30.473245] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56237 ] 00:05:01.419 [2024-11-06 15:03:30.610376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.419 [2024-11-06 15:03:30.667109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.796 15:03:31 -- accel/accel.sh@18 -- # out=' 00:05:02.796 SPDK Configuration: 00:05:02.796 Core mask: 0x1 00:05:02.796 00:05:02.796 Accel Perf Configuration: 00:05:02.796 Workload Type: crc32c 00:05:02.796 CRC-32C seed: 0 00:05:02.796 Transfer size: 4096 bytes 00:05:02.796 Vector count 2 00:05:02.796 Module: software 00:05:02.797 Queue depth: 32 00:05:02.797 Allocate depth: 32 00:05:02.797 # threads/core: 1 00:05:02.797 Run time: 1 seconds 00:05:02.797 Verify: Yes 00:05:02.797 00:05:02.797 Running for 1 seconds... 
00:05:02.797 00:05:02.797 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:02.797 ------------------------------------------------------------------------------------ 00:05:02.797 0,0 351008/s 2742 MiB/s 0 0 00:05:02.797 ==================================================================================== 00:05:02.797 Total 351008/s 1371 MiB/s 0 0' 00:05:02.797 15:03:31 -- accel/accel.sh@20 -- # IFS=: 00:05:02.797 15:03:31 -- accel/accel.sh@20 -- # read -r var val 00:05:02.797 15:03:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:02.797 15:03:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:02.797 15:03:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:02.797 15:03:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:02.797 15:03:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:02.797 15:03:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:02.797 15:03:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:02.797 15:03:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:02.797 15:03:31 -- accel/accel.sh@41 -- # local IFS=, 00:05:02.797 15:03:31 -- accel/accel.sh@42 -- # jq -r . 00:05:02.797 [2024-11-06 15:03:31.860037] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:02.797 [2024-11-06 15:03:31.860129] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56251 ] 00:05:02.797 [2024-11-06 15:03:31.996044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.797 [2024-11-06 15:03:32.062156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.056 15:03:32 -- accel/accel.sh@21 -- # val= 00:05:03.056 15:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:03.056 15:03:32 -- accel/accel.sh@21 -- # val= 00:05:03.056 15:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:03.056 15:03:32 -- accel/accel.sh@21 -- # val=0x1 00:05:03.056 15:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:03.056 15:03:32 -- accel/accel.sh@21 -- # val= 00:05:03.056 15:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:03.056 15:03:32 -- accel/accel.sh@21 -- # val= 00:05:03.056 15:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:03.056 15:03:32 -- accel/accel.sh@21 -- # val=crc32c 00:05:03.056 15:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.056 15:03:32 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:03.056 15:03:32 -- accel/accel.sh@21 -- # val=0 00:05:03.056 15:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:03.056 15:03:32 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:03.056 15:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:03.056 15:03:32 -- accel/accel.sh@21 -- # val= 00:05:03.056 15:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:03.056 15:03:32 -- accel/accel.sh@21 -- # val=software 00:05:03.056 15:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.056 15:03:32 -- accel/accel.sh@23 -- # accel_module=software 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:03.056 15:03:32 -- accel/accel.sh@21 -- # val=32 00:05:03.056 15:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:03.056 15:03:32 -- accel/accel.sh@21 -- # val=32 00:05:03.056 15:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:03.056 15:03:32 -- accel/accel.sh@21 -- # val=1 00:05:03.056 15:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:03.056 15:03:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:03.056 15:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:03.056 15:03:32 -- accel/accel.sh@21 -- # val=Yes 00:05:03.056 15:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:03.056 15:03:32 -- accel/accel.sh@21 -- # val= 00:05:03.056 15:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:03.056 15:03:32 -- accel/accel.sh@21 -- # val= 00:05:03.056 15:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:03.056 15:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:03.993 15:03:33 -- accel/accel.sh@21 -- # val= 00:05:03.993 15:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.993 15:03:33 -- accel/accel.sh@20 -- # IFS=: 00:05:03.993 15:03:33 -- accel/accel.sh@20 -- # read -r var val 00:05:03.993 15:03:33 -- accel/accel.sh@21 -- # val= 00:05:03.993 15:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.993 15:03:33 -- accel/accel.sh@20 -- # IFS=: 00:05:03.993 15:03:33 -- accel/accel.sh@20 -- # read -r var val 00:05:03.993 15:03:33 -- accel/accel.sh@21 -- # val= 00:05:03.993 15:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.993 15:03:33 -- accel/accel.sh@20 -- # IFS=: 00:05:03.993 15:03:33 -- accel/accel.sh@20 -- # read -r var val 00:05:03.993 15:03:33 -- accel/accel.sh@21 -- # val= 00:05:03.993 15:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.993 15:03:33 -- accel/accel.sh@20 -- # IFS=: 00:05:03.993 15:03:33 -- accel/accel.sh@20 -- # read -r var val 00:05:03.993 15:03:33 -- accel/accel.sh@21 -- # val= 00:05:03.993 15:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.993 15:03:33 -- accel/accel.sh@20 -- # IFS=: 00:05:03.993 15:03:33 -- 
accel/accel.sh@20 -- # read -r var val 00:05:03.993 15:03:33 -- accel/accel.sh@21 -- # val= 00:05:03.993 15:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.993 15:03:33 -- accel/accel.sh@20 -- # IFS=: 00:05:03.993 15:03:33 -- accel/accel.sh@20 -- # read -r var val 00:05:03.993 15:03:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:03.993 15:03:33 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:03.993 15:03:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:03.993 00:05:03.993 real 0m2.786s 00:05:03.993 user 0m2.425s 00:05:03.993 sys 0m0.158s 00:05:03.993 15:03:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:03.993 15:03:33 -- common/autotest_common.sh@10 -- # set +x 00:05:03.993 ************************************ 00:05:03.993 END TEST accel_crc32c_C2 00:05:03.993 ************************************ 00:05:04.253 15:03:33 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:04.253 15:03:33 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:04.253 15:03:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.253 15:03:33 -- common/autotest_common.sh@10 -- # set +x 00:05:04.253 ************************************ 00:05:04.253 START TEST accel_copy 00:05:04.253 ************************************ 00:05:04.253 15:03:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:05:04.253 15:03:33 -- accel/accel.sh@16 -- # local accel_opc 00:05:04.253 15:03:33 -- accel/accel.sh@17 -- # local accel_module 00:05:04.253 15:03:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:04.253 15:03:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:04.253 15:03:33 -- accel/accel.sh@12 -- # build_accel_config 00:05:04.253 15:03:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:04.253 15:03:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:04.253 15:03:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:04.253 15:03:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:04.253 15:03:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:04.253 15:03:33 -- accel/accel.sh@41 -- # local IFS=, 00:05:04.253 15:03:33 -- accel/accel.sh@42 -- # jq -r . 00:05:04.253 [2024-11-06 15:03:33.310684] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:04.253 [2024-11-06 15:03:33.311572] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56286 ] 00:05:04.253 [2024-11-06 15:03:33.449706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.253 [2024-11-06 15:03:33.506302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.631 15:03:34 -- accel/accel.sh@18 -- # out=' 00:05:05.631 SPDK Configuration: 00:05:05.631 Core mask: 0x1 00:05:05.631 00:05:05.631 Accel Perf Configuration: 00:05:05.631 Workload Type: copy 00:05:05.631 Transfer size: 4096 bytes 00:05:05.631 Vector count 1 00:05:05.631 Module: software 00:05:05.631 Queue depth: 32 00:05:05.631 Allocate depth: 32 00:05:05.631 # threads/core: 1 00:05:05.631 Run time: 1 seconds 00:05:05.631 Verify: Yes 00:05:05.631 00:05:05.631 Running for 1 seconds... 
00:05:05.631 00:05:05.631 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:05.631 ------------------------------------------------------------------------------------ 00:05:05.631 0,0 339424/s 1325 MiB/s 0 0 00:05:05.631 ==================================================================================== 00:05:05.631 Total 339424/s 1325 MiB/s 0 0' 00:05:05.631 15:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:05.631 15:03:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:05.631 15:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:05.631 15:03:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:05.631 15:03:34 -- accel/accel.sh@12 -- # build_accel_config 00:05:05.631 15:03:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:05.631 15:03:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.631 15:03:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.631 15:03:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:05.631 15:03:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:05.631 15:03:34 -- accel/accel.sh@41 -- # local IFS=, 00:05:05.631 15:03:34 -- accel/accel.sh@42 -- # jq -r . 00:05:05.631 [2024-11-06 15:03:34.679088] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:05.631 [2024-11-06 15:03:34.679183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56305 ] 00:05:05.631 [2024-11-06 15:03:34.814848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.631 [2024-11-06 15:03:34.865367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.631 15:03:34 -- accel/accel.sh@21 -- # val= 00:05:05.631 15:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.631 15:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:05.631 15:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:05.631 15:03:34 -- accel/accel.sh@21 -- # val= 00:05:05.631 15:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.631 15:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:05.631 15:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:05.631 15:03:34 -- accel/accel.sh@21 -- # val=0x1 00:05:05.631 15:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.631 15:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:05.631 15:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:05.631 15:03:34 -- accel/accel.sh@21 -- # val= 00:05:05.631 15:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.631 15:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:05.631 15:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:05.631 15:03:34 -- accel/accel.sh@21 -- # val= 00:05:05.631 15:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.631 15:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:05.631 15:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:05.631 15:03:34 -- accel/accel.sh@21 -- # val=copy 00:05:05.631 15:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.631 15:03:34 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:05.631 15:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:05.631 15:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:05.631 15:03:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:05.631 15:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.631 15:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:05.631 15:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:05.631 15:03:34 -- 
accel/accel.sh@21 -- # val= 00:05:05.631 15:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.631 15:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:05.631 15:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:05.631 15:03:34 -- accel/accel.sh@21 -- # val=software 00:05:05.631 15:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.631 15:03:34 -- accel/accel.sh@23 -- # accel_module=software 00:05:05.631 15:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:05.891 15:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:05.891 15:03:34 -- accel/accel.sh@21 -- # val=32 00:05:05.891 15:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.891 15:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:05.891 15:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:05.891 15:03:34 -- accel/accel.sh@21 -- # val=32 00:05:05.891 15:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.891 15:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:05.891 15:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:05.891 15:03:34 -- accel/accel.sh@21 -- # val=1 00:05:05.891 15:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.891 15:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:05.891 15:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:05.891 15:03:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:05.891 15:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.891 15:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:05.891 15:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:05.891 15:03:34 -- accel/accel.sh@21 -- # val=Yes 00:05:05.891 15:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.891 15:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:05.891 15:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:05.891 15:03:34 -- accel/accel.sh@21 -- # val= 00:05:05.891 15:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.891 15:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:05.891 15:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:05.891 15:03:34 -- accel/accel.sh@21 -- # val= 00:05:05.891 15:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.891 15:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:05.891 15:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:06.828 15:03:36 -- accel/accel.sh@21 -- # val= 00:05:06.828 15:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:06.828 15:03:36 -- accel/accel.sh@20 -- # IFS=: 00:05:06.828 15:03:36 -- accel/accel.sh@20 -- # read -r var val 00:05:06.828 15:03:36 -- accel/accel.sh@21 -- # val= 00:05:06.828 15:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:06.828 15:03:36 -- accel/accel.sh@20 -- # IFS=: 00:05:06.828 15:03:36 -- accel/accel.sh@20 -- # read -r var val 00:05:06.828 15:03:36 -- accel/accel.sh@21 -- # val= 00:05:06.828 15:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:06.828 15:03:36 -- accel/accel.sh@20 -- # IFS=: 00:05:06.828 15:03:36 -- accel/accel.sh@20 -- # read -r var val 00:05:06.828 15:03:36 -- accel/accel.sh@21 -- # val= 00:05:06.828 15:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:06.828 15:03:36 -- accel/accel.sh@20 -- # IFS=: 00:05:06.828 15:03:36 -- accel/accel.sh@20 -- # read -r var val 00:05:06.828 15:03:36 -- accel/accel.sh@21 -- # val= 00:05:06.828 15:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:06.828 15:03:36 -- accel/accel.sh@20 -- # IFS=: 00:05:06.828 15:03:36 -- accel/accel.sh@20 -- # read -r var val 00:05:06.828 15:03:36 -- accel/accel.sh@21 -- # val= 00:05:06.828 15:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:06.828 15:03:36 -- accel/accel.sh@20 -- # IFS=: 00:05:06.828 15:03:36 -- 
accel/accel.sh@20 -- # read -r var val 00:05:06.828 ************************************ 00:05:06.828 END TEST accel_copy 00:05:06.828 ************************************ 00:05:06.828 15:03:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:06.828 15:03:36 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:06.828 15:03:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:06.828 00:05:06.828 real 0m2.725s 00:05:06.828 user 0m2.372s 00:05:06.828 sys 0m0.152s 00:05:06.828 15:03:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:06.828 15:03:36 -- common/autotest_common.sh@10 -- # set +x 00:05:06.828 15:03:36 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:06.828 15:03:36 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:06.828 15:03:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.828 15:03:36 -- common/autotest_common.sh@10 -- # set +x 00:05:06.828 ************************************ 00:05:06.828 START TEST accel_fill 00:05:06.828 ************************************ 00:05:06.828 15:03:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:06.828 15:03:36 -- accel/accel.sh@16 -- # local accel_opc 00:05:06.828 15:03:36 -- accel/accel.sh@17 -- # local accel_module 00:05:06.828 15:03:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:06.828 15:03:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:06.828 15:03:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:06.828 15:03:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:06.828 15:03:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.828 15:03:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.828 15:03:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:06.828 15:03:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:06.828 15:03:36 -- accel/accel.sh@41 -- # local IFS=, 00:05:06.828 15:03:36 -- accel/accel.sh@42 -- # jq -r . 00:05:06.828 [2024-11-06 15:03:36.089378] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:06.828 [2024-11-06 15:03:36.089472] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56334 ] 00:05:07.088 [2024-11-06 15:03:36.224652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.088 [2024-11-06 15:03:36.284885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.469 15:03:37 -- accel/accel.sh@18 -- # out=' 00:05:08.469 SPDK Configuration: 00:05:08.469 Core mask: 0x1 00:05:08.469 00:05:08.469 Accel Perf Configuration: 00:05:08.469 Workload Type: fill 00:05:08.469 Fill pattern: 0x80 00:05:08.469 Transfer size: 4096 bytes 00:05:08.469 Vector count 1 00:05:08.469 Module: software 00:05:08.469 Queue depth: 64 00:05:08.469 Allocate depth: 64 00:05:08.469 # threads/core: 1 00:05:08.469 Run time: 1 seconds 00:05:08.469 Verify: Yes 00:05:08.469 00:05:08.469 Running for 1 seconds... 
00:05:08.469 00:05:08.470 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:08.470 ------------------------------------------------------------------------------------ 00:05:08.470 0,0 499328/s 1950 MiB/s 0 0 00:05:08.470 ==================================================================================== 00:05:08.470 Total 499328/s 1950 MiB/s 0 0' 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:08.470 15:03:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:08.470 15:03:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:08.470 15:03:37 -- accel/accel.sh@12 -- # build_accel_config 00:05:08.470 15:03:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:08.470 15:03:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:08.470 15:03:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.470 15:03:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:08.470 15:03:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:08.470 15:03:37 -- accel/accel.sh@41 -- # local IFS=, 00:05:08.470 15:03:37 -- accel/accel.sh@42 -- # jq -r . 00:05:08.470 [2024-11-06 15:03:37.456144] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:08.470 [2024-11-06 15:03:37.456237] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56354 ] 00:05:08.470 [2024-11-06 15:03:37.593225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.470 [2024-11-06 15:03:37.663531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.470 15:03:37 -- accel/accel.sh@21 -- # val= 00:05:08.470 15:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:08.470 15:03:37 -- accel/accel.sh@21 -- # val= 00:05:08.470 15:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:08.470 15:03:37 -- accel/accel.sh@21 -- # val=0x1 00:05:08.470 15:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:08.470 15:03:37 -- accel/accel.sh@21 -- # val= 00:05:08.470 15:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:08.470 15:03:37 -- accel/accel.sh@21 -- # val= 00:05:08.470 15:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:08.470 15:03:37 -- accel/accel.sh@21 -- # val=fill 00:05:08.470 15:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.470 15:03:37 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:08.470 15:03:37 -- accel/accel.sh@21 -- # val=0x80 00:05:08.470 15:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # read -r var val 
00:05:08.470 15:03:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:08.470 15:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:08.470 15:03:37 -- accel/accel.sh@21 -- # val= 00:05:08.470 15:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:08.470 15:03:37 -- accel/accel.sh@21 -- # val=software 00:05:08.470 15:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.470 15:03:37 -- accel/accel.sh@23 -- # accel_module=software 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:08.470 15:03:37 -- accel/accel.sh@21 -- # val=64 00:05:08.470 15:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:08.470 15:03:37 -- accel/accel.sh@21 -- # val=64 00:05:08.470 15:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:08.470 15:03:37 -- accel/accel.sh@21 -- # val=1 00:05:08.470 15:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:08.470 15:03:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:08.470 15:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:08.470 15:03:37 -- accel/accel.sh@21 -- # val=Yes 00:05:08.470 15:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:08.470 15:03:37 -- accel/accel.sh@21 -- # val= 00:05:08.470 15:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:08.470 15:03:37 -- accel/accel.sh@21 -- # val= 00:05:08.470 15:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:08.470 15:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:09.850 15:03:38 -- accel/accel.sh@21 -- # val= 00:05:09.850 15:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.850 15:03:38 -- accel/accel.sh@20 -- # IFS=: 00:05:09.850 15:03:38 -- accel/accel.sh@20 -- # read -r var val 00:05:09.850 15:03:38 -- accel/accel.sh@21 -- # val= 00:05:09.850 15:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.850 15:03:38 -- accel/accel.sh@20 -- # IFS=: 00:05:09.850 15:03:38 -- accel/accel.sh@20 -- # read -r var val 00:05:09.850 15:03:38 -- accel/accel.sh@21 -- # val= 00:05:09.850 15:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.850 15:03:38 -- accel/accel.sh@20 -- # IFS=: 00:05:09.850 15:03:38 -- accel/accel.sh@20 -- # read -r var val 00:05:09.850 15:03:38 -- accel/accel.sh@21 -- # val= 00:05:09.850 15:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.850 15:03:38 -- accel/accel.sh@20 -- # IFS=: 00:05:09.850 15:03:38 -- accel/accel.sh@20 -- # read -r var val 00:05:09.850 15:03:38 -- accel/accel.sh@21 -- # val= 00:05:09.850 15:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.850 15:03:38 -- accel/accel.sh@20 -- # IFS=: 
00:05:09.850 15:03:38 -- accel/accel.sh@20 -- # read -r var val 00:05:09.850 15:03:38 -- accel/accel.sh@21 -- # val= 00:05:09.850 15:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.850 15:03:38 -- accel/accel.sh@20 -- # IFS=: 00:05:09.850 15:03:38 -- accel/accel.sh@20 -- # read -r var val 00:05:09.850 15:03:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:09.850 ************************************ 00:05:09.850 END TEST accel_fill 00:05:09.850 ************************************ 00:05:09.850 15:03:38 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:09.850 15:03:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:09.850 00:05:09.850 real 0m2.776s 00:05:09.850 user 0m2.429s 00:05:09.850 sys 0m0.144s 00:05:09.850 15:03:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.850 15:03:38 -- common/autotest_common.sh@10 -- # set +x 00:05:09.850 15:03:38 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:09.850 15:03:38 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:09.850 15:03:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.850 15:03:38 -- common/autotest_common.sh@10 -- # set +x 00:05:09.850 ************************************ 00:05:09.850 START TEST accel_copy_crc32c 00:05:09.850 ************************************ 00:05:09.850 15:03:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:05:09.850 15:03:38 -- accel/accel.sh@16 -- # local accel_opc 00:05:09.850 15:03:38 -- accel/accel.sh@17 -- # local accel_module 00:05:09.850 15:03:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:09.850 15:03:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:09.850 15:03:38 -- accel/accel.sh@12 -- # build_accel_config 00:05:09.850 15:03:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:09.850 15:03:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.850 15:03:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.850 15:03:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:09.850 15:03:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:09.850 15:03:38 -- accel/accel.sh@41 -- # local IFS=, 00:05:09.850 15:03:38 -- accel/accel.sh@42 -- # jq -r . 00:05:09.850 [2024-11-06 15:03:38.923627] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:09.850 [2024-11-06 15:03:38.923806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56388 ] 00:05:09.850 [2024-11-06 15:03:39.069402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.850 [2024-11-06 15:03:39.118292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.228 15:03:40 -- accel/accel.sh@18 -- # out=' 00:05:11.228 SPDK Configuration: 00:05:11.228 Core mask: 0x1 00:05:11.228 00:05:11.228 Accel Perf Configuration: 00:05:11.228 Workload Type: copy_crc32c 00:05:11.228 CRC-32C seed: 0 00:05:11.228 Vector size: 4096 bytes 00:05:11.228 Transfer size: 4096 bytes 00:05:11.228 Vector count 1 00:05:11.228 Module: software 00:05:11.228 Queue depth: 32 00:05:11.228 Allocate depth: 32 00:05:11.228 # threads/core: 1 00:05:11.228 Run time: 1 seconds 00:05:11.228 Verify: Yes 00:05:11.228 00:05:11.228 Running for 1 seconds... 
00:05:11.228 00:05:11.228 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:11.228 ------------------------------------------------------------------------------------ 00:05:11.228 0,0 294912/s 1152 MiB/s 0 0 00:05:11.228 ==================================================================================== 00:05:11.228 Total 294912/s 1152 MiB/s 0 0' 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # IFS=: 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # read -r var val 00:05:11.228 15:03:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:11.228 15:03:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:11.228 15:03:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:11.228 15:03:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:11.228 15:03:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.228 15:03:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.228 15:03:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:11.228 15:03:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:11.228 15:03:40 -- accel/accel.sh@41 -- # local IFS=, 00:05:11.228 15:03:40 -- accel/accel.sh@42 -- # jq -r . 00:05:11.228 [2024-11-06 15:03:40.285469] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:11.228 [2024-11-06 15:03:40.285553] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56408 ] 00:05:11.228 [2024-11-06 15:03:40.413538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.228 [2024-11-06 15:03:40.459938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.228 15:03:40 -- accel/accel.sh@21 -- # val= 00:05:11.228 15:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # IFS=: 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # read -r var val 00:05:11.228 15:03:40 -- accel/accel.sh@21 -- # val= 00:05:11.228 15:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # IFS=: 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # read -r var val 00:05:11.228 15:03:40 -- accel/accel.sh@21 -- # val=0x1 00:05:11.228 15:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # IFS=: 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # read -r var val 00:05:11.228 15:03:40 -- accel/accel.sh@21 -- # val= 00:05:11.228 15:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # IFS=: 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # read -r var val 00:05:11.228 15:03:40 -- accel/accel.sh@21 -- # val= 00:05:11.228 15:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # IFS=: 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # read -r var val 00:05:11.228 15:03:40 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:11.228 15:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.228 15:03:40 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # IFS=: 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # read -r var val 00:05:11.228 15:03:40 -- accel/accel.sh@21 -- # val=0 00:05:11.228 15:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # IFS=: 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # read -r var val 00:05:11.228 
15:03:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:11.228 15:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # IFS=: 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # read -r var val 00:05:11.228 15:03:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:11.228 15:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # IFS=: 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # read -r var val 00:05:11.228 15:03:40 -- accel/accel.sh@21 -- # val= 00:05:11.228 15:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # IFS=: 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # read -r var val 00:05:11.228 15:03:40 -- accel/accel.sh@21 -- # val=software 00:05:11.228 15:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.228 15:03:40 -- accel/accel.sh@23 -- # accel_module=software 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # IFS=: 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # read -r var val 00:05:11.228 15:03:40 -- accel/accel.sh@21 -- # val=32 00:05:11.228 15:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # IFS=: 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # read -r var val 00:05:11.228 15:03:40 -- accel/accel.sh@21 -- # val=32 00:05:11.228 15:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # IFS=: 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # read -r var val 00:05:11.228 15:03:40 -- accel/accel.sh@21 -- # val=1 00:05:11.228 15:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # IFS=: 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # read -r var val 00:05:11.228 15:03:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:11.228 15:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # IFS=: 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # read -r var val 00:05:11.228 15:03:40 -- accel/accel.sh@21 -- # val=Yes 00:05:11.228 15:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # IFS=: 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # read -r var val 00:05:11.228 15:03:40 -- accel/accel.sh@21 -- # val= 00:05:11.228 15:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # IFS=: 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # read -r var val 00:05:11.228 15:03:40 -- accel/accel.sh@21 -- # val= 00:05:11.228 15:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # IFS=: 00:05:11.228 15:03:40 -- accel/accel.sh@20 -- # read -r var val 00:05:12.606 15:03:41 -- accel/accel.sh@21 -- # val= 00:05:12.606 15:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.606 15:03:41 -- accel/accel.sh@20 -- # IFS=: 00:05:12.606 15:03:41 -- accel/accel.sh@20 -- # read -r var val 00:05:12.606 15:03:41 -- accel/accel.sh@21 -- # val= 00:05:12.606 15:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.606 15:03:41 -- accel/accel.sh@20 -- # IFS=: 00:05:12.606 15:03:41 -- accel/accel.sh@20 -- # read -r var val 00:05:12.606 15:03:41 -- accel/accel.sh@21 -- # val= 00:05:12.606 15:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.606 15:03:41 -- accel/accel.sh@20 -- # IFS=: 00:05:12.606 15:03:41 -- accel/accel.sh@20 -- # read -r var val 00:05:12.606 15:03:41 -- accel/accel.sh@21 -- # val= 00:05:12.606 15:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.606 15:03:41 -- accel/accel.sh@20 -- # IFS=: 
00:05:12.606 15:03:41 -- accel/accel.sh@20 -- # read -r var val 00:05:12.606 15:03:41 -- accel/accel.sh@21 -- # val= 00:05:12.606 15:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.606 15:03:41 -- accel/accel.sh@20 -- # IFS=: 00:05:12.606 15:03:41 -- accel/accel.sh@20 -- # read -r var val 00:05:12.606 15:03:41 -- accel/accel.sh@21 -- # val= 00:05:12.606 15:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.606 15:03:41 -- accel/accel.sh@20 -- # IFS=: 00:05:12.606 15:03:41 -- accel/accel.sh@20 -- # read -r var val 00:05:12.606 15:03:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:12.606 15:03:41 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:12.606 ************************************ 00:05:12.606 END TEST accel_copy_crc32c 00:05:12.606 ************************************ 00:05:12.606 15:03:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:12.606 00:05:12.606 real 0m2.710s 00:05:12.606 user 0m2.359s 00:05:12.606 sys 0m0.147s 00:05:12.606 15:03:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:12.606 15:03:41 -- common/autotest_common.sh@10 -- # set +x 00:05:12.606 15:03:41 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:12.606 15:03:41 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:12.606 15:03:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.606 15:03:41 -- common/autotest_common.sh@10 -- # set +x 00:05:12.606 ************************************ 00:05:12.606 START TEST accel_copy_crc32c_C2 00:05:12.606 ************************************ 00:05:12.606 15:03:41 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:12.606 15:03:41 -- accel/accel.sh@16 -- # local accel_opc 00:05:12.606 15:03:41 -- accel/accel.sh@17 -- # local accel_module 00:05:12.606 15:03:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:12.606 15:03:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:12.606 15:03:41 -- accel/accel.sh@12 -- # build_accel_config 00:05:12.606 15:03:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:12.606 15:03:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.606 15:03:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.606 15:03:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:12.606 15:03:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:12.606 15:03:41 -- accel/accel.sh@41 -- # local IFS=, 00:05:12.606 15:03:41 -- accel/accel.sh@42 -- # jq -r . 00:05:12.606 [2024-11-06 15:03:41.670955] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
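The Bandwidth column in the copy_crc32c table above is just the transfer rate multiplied by the 4096-byte transfer size from the configuration dump. A quick sanity check of those figures, assuming bandwidth = transfers/s * transfer size:

xfers=294912; size=4096                          # values taken from the table above
echo "$(( xfers * size / 1024 / 1024 )) MiB/s"   # prints "1152 MiB/s", matching the report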
00:05:12.606 [2024-11-06 15:03:41.671035] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56437 ] 00:05:12.606 [2024-11-06 15:03:41.799433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.606 [2024-11-06 15:03:41.849517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.984 15:03:43 -- accel/accel.sh@18 -- # out=' 00:05:13.984 SPDK Configuration: 00:05:13.984 Core mask: 0x1 00:05:13.984 00:05:13.984 Accel Perf Configuration: 00:05:13.984 Workload Type: copy_crc32c 00:05:13.984 CRC-32C seed: 0 00:05:13.984 Vector size: 4096 bytes 00:05:13.984 Transfer size: 8192 bytes 00:05:13.984 Vector count 2 00:05:13.984 Module: software 00:05:13.984 Queue depth: 32 00:05:13.984 Allocate depth: 32 00:05:13.984 # threads/core: 1 00:05:13.984 Run time: 1 seconds 00:05:13.984 Verify: Yes 00:05:13.984 00:05:13.984 Running for 1 seconds... 00:05:13.984 00:05:13.984 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:13.984 ------------------------------------------------------------------------------------ 00:05:13.984 0,0 194016/s 1515 MiB/s 0 0 00:05:13.984 ==================================================================================== 00:05:13.984 Total 194016/s 757 MiB/s 0 0' 00:05:13.984 15:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:13.984 15:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:13.984 15:03:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:13.984 15:03:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:13.984 15:03:43 -- accel/accel.sh@12 -- # build_accel_config 00:05:13.984 15:03:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:13.984 15:03:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.984 15:03:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.984 15:03:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:13.984 15:03:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:13.984 15:03:43 -- accel/accel.sh@41 -- # local IFS=, 00:05:13.984 15:03:43 -- accel/accel.sh@42 -- # jq -r . 00:05:13.984 [2024-11-06 15:03:43.044720] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
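The copy_crc32c -C 2 configuration above reports a 4096-byte vector size, a vector count of 2, and therefore an 8192-byte transfer size. Checked with shell arithmetic, the per-core row of the table is consistent with the 8192-byte transfer size, while the Total row appears to be computed against the 4096-byte vector size instead:

echo $(( 194016 * 8192 / 1024 / 1024 ))   # 1515 -> matches the per-core 1515 MiB/s
echo $(( 194016 * 4096 / 1024 / 1024 ))   # 757  -> matches the Total 757 MiB/s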
00:05:13.984 [2024-11-06 15:03:43.045097] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56456 ] 00:05:13.984 [2024-11-06 15:03:43.178159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.984 [2024-11-06 15:03:43.228615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.243 15:03:43 -- accel/accel.sh@21 -- # val= 00:05:14.243 15:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:14.243 15:03:43 -- accel/accel.sh@21 -- # val= 00:05:14.243 15:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:14.243 15:03:43 -- accel/accel.sh@21 -- # val=0x1 00:05:14.243 15:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:14.243 15:03:43 -- accel/accel.sh@21 -- # val= 00:05:14.243 15:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:14.243 15:03:43 -- accel/accel.sh@21 -- # val= 00:05:14.243 15:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:14.243 15:03:43 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:14.243 15:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.243 15:03:43 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:14.243 15:03:43 -- accel/accel.sh@21 -- # val=0 00:05:14.243 15:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:14.243 15:03:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:14.243 15:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:14.243 15:03:43 -- accel/accel.sh@21 -- # val='8192 bytes' 00:05:14.243 15:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:14.243 15:03:43 -- accel/accel.sh@21 -- # val= 00:05:14.243 15:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:14.243 15:03:43 -- accel/accel.sh@21 -- # val=software 00:05:14.243 15:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.243 15:03:43 -- accel/accel.sh@23 -- # accel_module=software 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:14.243 15:03:43 -- accel/accel.sh@21 -- # val=32 00:05:14.243 15:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:14.243 15:03:43 -- accel/accel.sh@21 -- # val=32 
00:05:14.243 15:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:14.243 15:03:43 -- accel/accel.sh@21 -- # val=1 00:05:14.243 15:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:14.243 15:03:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:14.243 15:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:14.243 15:03:43 -- accel/accel.sh@21 -- # val=Yes 00:05:14.243 15:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:14.243 15:03:43 -- accel/accel.sh@21 -- # val= 00:05:14.243 15:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:14.243 15:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:14.243 15:03:43 -- accel/accel.sh@21 -- # val= 00:05:14.243 15:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.244 15:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:14.244 15:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:15.189 15:03:44 -- accel/accel.sh@21 -- # val= 00:05:15.189 15:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.190 15:03:44 -- accel/accel.sh@20 -- # IFS=: 00:05:15.190 15:03:44 -- accel/accel.sh@20 -- # read -r var val 00:05:15.190 15:03:44 -- accel/accel.sh@21 -- # val= 00:05:15.190 15:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.190 15:03:44 -- accel/accel.sh@20 -- # IFS=: 00:05:15.190 15:03:44 -- accel/accel.sh@20 -- # read -r var val 00:05:15.190 15:03:44 -- accel/accel.sh@21 -- # val= 00:05:15.190 15:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.190 15:03:44 -- accel/accel.sh@20 -- # IFS=: 00:05:15.190 15:03:44 -- accel/accel.sh@20 -- # read -r var val 00:05:15.190 15:03:44 -- accel/accel.sh@21 -- # val= 00:05:15.190 15:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.190 15:03:44 -- accel/accel.sh@20 -- # IFS=: 00:05:15.190 15:03:44 -- accel/accel.sh@20 -- # read -r var val 00:05:15.190 15:03:44 -- accel/accel.sh@21 -- # val= 00:05:15.190 15:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.190 15:03:44 -- accel/accel.sh@20 -- # IFS=: 00:05:15.190 15:03:44 -- accel/accel.sh@20 -- # read -r var val 00:05:15.190 15:03:44 -- accel/accel.sh@21 -- # val= 00:05:15.190 15:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.190 15:03:44 -- accel/accel.sh@20 -- # IFS=: 00:05:15.190 15:03:44 -- accel/accel.sh@20 -- # read -r var val 00:05:15.190 15:03:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:15.190 15:03:44 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:15.190 15:03:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.190 00:05:15.190 real 0m2.751s 00:05:15.190 user 0m2.396s 00:05:15.190 sys 0m0.148s 00:05:15.190 15:03:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:15.190 ************************************ 00:05:15.190 END TEST accel_copy_crc32c_C2 00:05:15.190 ************************************ 00:05:15.190 15:03:44 -- common/autotest_common.sh@10 -- # set +x 00:05:15.190 15:03:44 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:15.190 15:03:44 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:05:15.190 15:03:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.190 15:03:44 -- common/autotest_common.sh@10 -- # set +x 00:05:15.190 ************************************ 00:05:15.190 START TEST accel_dualcast 00:05:15.190 ************************************ 00:05:15.190 15:03:44 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:05:15.190 15:03:44 -- accel/accel.sh@16 -- # local accel_opc 00:05:15.190 15:03:44 -- accel/accel.sh@17 -- # local accel_module 00:05:15.190 15:03:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:05:15.190 15:03:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:15.190 15:03:44 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.190 15:03:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:15.465 15:03:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.465 15:03:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.465 15:03:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:15.465 15:03:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:15.465 15:03:44 -- accel/accel.sh@41 -- # local IFS=, 00:05:15.465 15:03:44 -- accel/accel.sh@42 -- # jq -r . 00:05:15.465 [2024-11-06 15:03:44.479489] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:15.465 [2024-11-06 15:03:44.479586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56491 ] 00:05:15.465 [2024-11-06 15:03:44.617895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.465 [2024-11-06 15:03:44.679935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.842 15:03:45 -- accel/accel.sh@18 -- # out=' 00:05:16.842 SPDK Configuration: 00:05:16.842 Core mask: 0x1 00:05:16.842 00:05:16.842 Accel Perf Configuration: 00:05:16.842 Workload Type: dualcast 00:05:16.842 Transfer size: 4096 bytes 00:05:16.842 Vector count 1 00:05:16.842 Module: software 00:05:16.842 Queue depth: 32 00:05:16.842 Allocate depth: 32 00:05:16.842 # threads/core: 1 00:05:16.842 Run time: 1 seconds 00:05:16.842 Verify: Yes 00:05:16.842 00:05:16.842 Running for 1 seconds... 00:05:16.842 00:05:16.842 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:16.842 ------------------------------------------------------------------------------------ 00:05:16.842 0,0 361216/s 1411 MiB/s 0 0 00:05:16.842 ==================================================================================== 00:05:16.842 Total 361216/s 1411 MiB/s 0 0' 00:05:16.842 15:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:16.842 15:03:45 -- accel/accel.sh@20 -- # read -r var val 00:05:16.842 15:03:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:16.842 15:03:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:16.842 15:03:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:16.842 15:03:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:16.842 15:03:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.842 15:03:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.842 15:03:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:16.842 15:03:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:16.842 15:03:45 -- accel/accel.sh@41 -- # local IFS=, 00:05:16.842 15:03:45 -- accel/accel.sh@42 -- # jq -r . 
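Each accel_perf invocation in these traces receives its optional JSON accel configuration through '-c /dev/fd/62'; the build_accel_config steps above (accel_json_cfg, then 'jq -r .') assemble that JSON before it is handed over an anonymous descriptor. A simplified sketch of the same pattern, not the literal accel.sh code and using a hypothetical empty config:

cfg='{}'    # hypothetical: no accel module overrides
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -c <(printf '%s\n' "$cfg" | jq -r .) -t 1 -w dualcast -y
# process substitution supplies a /dev/fd/NN path, which is what the captured
# command lines record as '-c /dev/fd/62'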
00:05:16.842 [2024-11-06 15:03:45.872222] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:16.842 [2024-11-06 15:03:45.872337] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56505 ] 00:05:16.842 [2024-11-06 15:03:46.005716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.842 [2024-11-06 15:03:46.058156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.842 15:03:46 -- accel/accel.sh@21 -- # val= 00:05:16.842 15:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.842 15:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:16.842 15:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:16.842 15:03:46 -- accel/accel.sh@21 -- # val= 00:05:16.842 15:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.842 15:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:16.842 15:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:16.842 15:03:46 -- accel/accel.sh@21 -- # val=0x1 00:05:16.842 15:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.842 15:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:16.842 15:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:16.842 15:03:46 -- accel/accel.sh@21 -- # val= 00:05:16.842 15:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.842 15:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:16.842 15:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:16.842 15:03:46 -- accel/accel.sh@21 -- # val= 00:05:16.842 15:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.842 15:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:16.842 15:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:16.842 15:03:46 -- accel/accel.sh@21 -- # val=dualcast 00:05:16.842 15:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.842 15:03:46 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:05:16.842 15:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:16.842 15:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:16.842 15:03:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:16.842 15:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.842 15:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:16.842 15:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:16.842 15:03:46 -- accel/accel.sh@21 -- # val= 00:05:16.842 15:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.842 15:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:16.842 15:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:16.842 15:03:46 -- accel/accel.sh@21 -- # val=software 00:05:16.842 15:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.843 15:03:46 -- accel/accel.sh@23 -- # accel_module=software 00:05:16.843 15:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:16.843 15:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:16.843 15:03:46 -- accel/accel.sh@21 -- # val=32 00:05:16.843 15:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.843 15:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:16.843 15:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:16.843 15:03:46 -- accel/accel.sh@21 -- # val=32 00:05:16.843 15:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.843 15:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:16.843 15:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:16.843 15:03:46 -- accel/accel.sh@21 -- # val=1 00:05:16.843 15:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.843 15:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:16.843 
15:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:16.843 15:03:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:16.843 15:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.843 15:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:16.843 15:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:16.843 15:03:46 -- accel/accel.sh@21 -- # val=Yes 00:05:16.843 15:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.843 15:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:16.843 15:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:16.843 15:03:46 -- accel/accel.sh@21 -- # val= 00:05:16.843 15:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.843 15:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:16.843 15:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:16.843 15:03:46 -- accel/accel.sh@21 -- # val= 00:05:16.843 15:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.843 15:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:16.843 15:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:18.219 15:03:47 -- accel/accel.sh@21 -- # val= 00:05:18.219 15:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.219 15:03:47 -- accel/accel.sh@20 -- # IFS=: 00:05:18.219 15:03:47 -- accel/accel.sh@20 -- # read -r var val 00:05:18.219 15:03:47 -- accel/accel.sh@21 -- # val= 00:05:18.219 15:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.219 15:03:47 -- accel/accel.sh@20 -- # IFS=: 00:05:18.219 15:03:47 -- accel/accel.sh@20 -- # read -r var val 00:05:18.219 15:03:47 -- accel/accel.sh@21 -- # val= 00:05:18.219 15:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.219 15:03:47 -- accel/accel.sh@20 -- # IFS=: 00:05:18.219 15:03:47 -- accel/accel.sh@20 -- # read -r var val 00:05:18.219 15:03:47 -- accel/accel.sh@21 -- # val= 00:05:18.219 15:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.219 15:03:47 -- accel/accel.sh@20 -- # IFS=: 00:05:18.219 15:03:47 -- accel/accel.sh@20 -- # read -r var val 00:05:18.219 15:03:47 -- accel/accel.sh@21 -- # val= 00:05:18.219 15:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.219 15:03:47 -- accel/accel.sh@20 -- # IFS=: 00:05:18.219 15:03:47 -- accel/accel.sh@20 -- # read -r var val 00:05:18.219 15:03:47 -- accel/accel.sh@21 -- # val= 00:05:18.219 15:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.219 15:03:47 -- accel/accel.sh@20 -- # IFS=: 00:05:18.219 15:03:47 -- accel/accel.sh@20 -- # read -r var val 00:05:18.219 15:03:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:18.219 15:03:47 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:05:18.219 15:03:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:18.219 00:05:18.219 real 0m2.769s 00:05:18.219 user 0m2.408s 00:05:18.219 sys 0m0.153s 00:05:18.219 ************************************ 00:05:18.219 END TEST accel_dualcast 00:05:18.219 ************************************ 00:05:18.219 15:03:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.219 15:03:47 -- common/autotest_common.sh@10 -- # set +x 00:05:18.219 15:03:47 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:18.219 15:03:47 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:18.219 15:03:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.219 15:03:47 -- common/autotest_common.sh@10 -- # set +x 00:05:18.219 ************************************ 00:05:18.219 START TEST accel_compare 00:05:18.219 ************************************ 00:05:18.219 15:03:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:05:18.219 
15:03:47 -- accel/accel.sh@16 -- # local accel_opc 00:05:18.219 15:03:47 -- accel/accel.sh@17 -- # local accel_module 00:05:18.219 15:03:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:05:18.219 15:03:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:18.219 15:03:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:18.219 15:03:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:18.219 15:03:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.219 15:03:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.219 15:03:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:18.219 15:03:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:18.220 15:03:47 -- accel/accel.sh@41 -- # local IFS=, 00:05:18.220 15:03:47 -- accel/accel.sh@42 -- # jq -r . 00:05:18.220 [2024-11-06 15:03:47.299247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:18.220 [2024-11-06 15:03:47.299342] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56539 ] 00:05:18.220 [2024-11-06 15:03:47.436565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.478 [2024-11-06 15:03:47.501022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.428 15:03:48 -- accel/accel.sh@18 -- # out=' 00:05:19.428 SPDK Configuration: 00:05:19.428 Core mask: 0x1 00:05:19.428 00:05:19.428 Accel Perf Configuration: 00:05:19.428 Workload Type: compare 00:05:19.428 Transfer size: 4096 bytes 00:05:19.428 Vector count 1 00:05:19.428 Module: software 00:05:19.428 Queue depth: 32 00:05:19.428 Allocate depth: 32 00:05:19.428 # threads/core: 1 00:05:19.428 Run time: 1 seconds 00:05:19.428 Verify: Yes 00:05:19.428 00:05:19.428 Running for 1 seconds... 00:05:19.428 00:05:19.428 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:19.428 ------------------------------------------------------------------------------------ 00:05:19.428 0,0 481408/s 1880 MiB/s 0 0 00:05:19.428 ==================================================================================== 00:05:19.428 Total 481408/s 1880 MiB/s 0 0' 00:05:19.428 15:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:19.428 15:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:19.428 15:03:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:19.428 15:03:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:19.428 15:03:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:19.428 15:03:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:19.428 15:03:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.428 15:03:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.428 15:03:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:19.428 15:03:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:19.428 15:03:48 -- accel/accel.sh@41 -- # local IFS=, 00:05:19.428 15:03:48 -- accel/accel.sh@42 -- # jq -r . 00:05:19.428 [2024-11-06 15:03:48.701849] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
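The recurring lines of the form '[[ software == \s\o\f\t\w\a\r\e ]]' are not corruption: under 'set -x', bash escapes each character of a quoted right-hand side of '==' inside '[[ ]]' to mark it as a literal pattern, so these checks simply confirm that the software accel module handled the workload. A minimal reproduction of that rendering:

set -x
accel_module=software
[[ $accel_module == "software" ]] && echo 'software module used'
# the trace printed for the test reads: + [[ software == \s\o\f\t\w\a\r\e ]]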
00:05:19.428 [2024-11-06 15:03:48.701987] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56559 ] 00:05:19.687 [2024-11-06 15:03:48.838665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.688 [2024-11-06 15:03:48.898836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.688 15:03:48 -- accel/accel.sh@21 -- # val= 00:05:19.688 15:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 15:03:48 -- accel/accel.sh@21 -- # val= 00:05:19.688 15:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 15:03:48 -- accel/accel.sh@21 -- # val=0x1 00:05:19.688 15:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 15:03:48 -- accel/accel.sh@21 -- # val= 00:05:19.688 15:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 15:03:48 -- accel/accel.sh@21 -- # val= 00:05:19.688 15:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 15:03:48 -- accel/accel.sh@21 -- # val=compare 00:05:19.688 15:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 15:03:48 -- accel/accel.sh@24 -- # accel_opc=compare 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 15:03:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:19.688 15:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 15:03:48 -- accel/accel.sh@21 -- # val= 00:05:19.688 15:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 15:03:48 -- accel/accel.sh@21 -- # val=software 00:05:19.688 15:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 15:03:48 -- accel/accel.sh@23 -- # accel_module=software 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 15:03:48 -- accel/accel.sh@21 -- # val=32 00:05:19.688 15:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 15:03:48 -- accel/accel.sh@21 -- # val=32 00:05:19.688 15:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 15:03:48 -- accel/accel.sh@21 -- # val=1 00:05:19.688 15:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 15:03:48 -- accel/accel.sh@21 -- # val='1 seconds' 
00:05:19.688 15:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 15:03:48 -- accel/accel.sh@21 -- # val=Yes 00:05:19.688 15:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 15:03:48 -- accel/accel.sh@21 -- # val= 00:05:19.688 15:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 15:03:48 -- accel/accel.sh@21 -- # val= 00:05:19.688 15:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 15:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:21.071 15:03:50 -- accel/accel.sh@21 -- # val= 00:05:21.071 15:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.071 15:03:50 -- accel/accel.sh@20 -- # IFS=: 00:05:21.071 15:03:50 -- accel/accel.sh@20 -- # read -r var val 00:05:21.071 15:03:50 -- accel/accel.sh@21 -- # val= 00:05:21.071 15:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.071 15:03:50 -- accel/accel.sh@20 -- # IFS=: 00:05:21.071 15:03:50 -- accel/accel.sh@20 -- # read -r var val 00:05:21.071 15:03:50 -- accel/accel.sh@21 -- # val= 00:05:21.071 15:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.071 15:03:50 -- accel/accel.sh@20 -- # IFS=: 00:05:21.071 15:03:50 -- accel/accel.sh@20 -- # read -r var val 00:05:21.071 15:03:50 -- accel/accel.sh@21 -- # val= 00:05:21.071 15:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.071 15:03:50 -- accel/accel.sh@20 -- # IFS=: 00:05:21.071 15:03:50 -- accel/accel.sh@20 -- # read -r var val 00:05:21.071 15:03:50 -- accel/accel.sh@21 -- # val= 00:05:21.071 15:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.071 15:03:50 -- accel/accel.sh@20 -- # IFS=: 00:05:21.071 15:03:50 -- accel/accel.sh@20 -- # read -r var val 00:05:21.071 15:03:50 -- accel/accel.sh@21 -- # val= 00:05:21.071 15:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.071 15:03:50 -- accel/accel.sh@20 -- # IFS=: 00:05:21.071 15:03:50 -- accel/accel.sh@20 -- # read -r var val 00:05:21.071 15:03:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:21.071 15:03:50 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:05:21.071 15:03:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:21.071 00:05:21.071 real 0m2.788s 00:05:21.071 user 0m2.424s 00:05:21.071 sys 0m0.160s 00:05:21.071 ************************************ 00:05:21.071 END TEST accel_compare 00:05:21.071 ************************************ 00:05:21.071 15:03:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.071 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:05:21.071 15:03:50 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:21.071 15:03:50 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:21.071 15:03:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.071 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:05:21.071 ************************************ 00:05:21.071 START TEST accel_xor 00:05:21.071 ************************************ 00:05:21.071 15:03:50 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:05:21.071 15:03:50 -- accel/accel.sh@16 -- # local accel_opc 00:05:21.071 15:03:50 -- accel/accel.sh@17 -- # local accel_module 00:05:21.071 
15:03:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:05:21.071 15:03:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:21.071 15:03:50 -- accel/accel.sh@12 -- # build_accel_config 00:05:21.071 15:03:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:21.071 15:03:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.071 15:03:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.071 15:03:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:21.071 15:03:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:21.071 15:03:50 -- accel/accel.sh@41 -- # local IFS=, 00:05:21.071 15:03:50 -- accel/accel.sh@42 -- # jq -r . 00:05:21.071 [2024-11-06 15:03:50.134618] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:21.071 [2024-11-06 15:03:50.134755] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56593 ] 00:05:21.071 [2024-11-06 15:03:50.269410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.071 [2024-11-06 15:03:50.318169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.448 15:03:51 -- accel/accel.sh@18 -- # out=' 00:05:22.448 SPDK Configuration: 00:05:22.448 Core mask: 0x1 00:05:22.448 00:05:22.448 Accel Perf Configuration: 00:05:22.448 Workload Type: xor 00:05:22.448 Source buffers: 2 00:05:22.448 Transfer size: 4096 bytes 00:05:22.448 Vector count 1 00:05:22.448 Module: software 00:05:22.448 Queue depth: 32 00:05:22.448 Allocate depth: 32 00:05:22.448 # threads/core: 1 00:05:22.448 Run time: 1 seconds 00:05:22.448 Verify: Yes 00:05:22.448 00:05:22.448 Running for 1 seconds... 00:05:22.448 00:05:22.448 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:22.448 ------------------------------------------------------------------------------------ 00:05:22.448 0,0 280928/s 1097 MiB/s 0 0 00:05:22.448 ==================================================================================== 00:05:22.448 Total 280928/s 1097 MiB/s 0 0' 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:22.448 15:03:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:22.448 15:03:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:22.448 15:03:51 -- accel/accel.sh@12 -- # build_accel_config 00:05:22.448 15:03:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:22.448 15:03:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.448 15:03:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.448 15:03:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:22.448 15:03:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:22.448 15:03:51 -- accel/accel.sh@41 -- # local IFS=, 00:05:22.448 15:03:51 -- accel/accel.sh@42 -- # jq -r . 00:05:22.448 [2024-11-06 15:03:51.490613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:22.448 [2024-11-06 15:03:51.490747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56613 ] 00:05:22.448 [2024-11-06 15:03:51.622059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.448 [2024-11-06 15:03:51.669450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.448 15:03:51 -- accel/accel.sh@21 -- # val= 00:05:22.448 15:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:22.448 15:03:51 -- accel/accel.sh@21 -- # val= 00:05:22.448 15:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:22.448 15:03:51 -- accel/accel.sh@21 -- # val=0x1 00:05:22.448 15:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:22.448 15:03:51 -- accel/accel.sh@21 -- # val= 00:05:22.448 15:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:22.448 15:03:51 -- accel/accel.sh@21 -- # val= 00:05:22.448 15:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:22.448 15:03:51 -- accel/accel.sh@21 -- # val=xor 00:05:22.448 15:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.448 15:03:51 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:22.448 15:03:51 -- accel/accel.sh@21 -- # val=2 00:05:22.448 15:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:22.448 15:03:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:22.448 15:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:22.448 15:03:51 -- accel/accel.sh@21 -- # val= 00:05:22.448 15:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:22.448 15:03:51 -- accel/accel.sh@21 -- # val=software 00:05:22.448 15:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.448 15:03:51 -- accel/accel.sh@23 -- # accel_module=software 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:22.448 15:03:51 -- accel/accel.sh@21 -- # val=32 00:05:22.448 15:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:22.448 15:03:51 -- accel/accel.sh@21 -- # val=32 00:05:22.448 15:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:22.448 15:03:51 -- accel/accel.sh@21 -- # val=1 00:05:22.448 15:03:51 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:22.448 15:03:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:22.448 15:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:22.448 15:03:51 -- accel/accel.sh@21 -- # val=Yes 00:05:22.448 15:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:22.448 15:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:22.449 15:03:51 -- accel/accel.sh@21 -- # val= 00:05:22.449 15:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.449 15:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:22.449 15:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:22.449 15:03:51 -- accel/accel.sh@21 -- # val= 00:05:22.449 15:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.449 15:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:22.449 15:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:23.825 15:03:52 -- accel/accel.sh@21 -- # val= 00:05:23.825 15:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.825 15:03:52 -- accel/accel.sh@20 -- # IFS=: 00:05:23.825 15:03:52 -- accel/accel.sh@20 -- # read -r var val 00:05:23.825 15:03:52 -- accel/accel.sh@21 -- # val= 00:05:23.825 15:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.825 15:03:52 -- accel/accel.sh@20 -- # IFS=: 00:05:23.825 15:03:52 -- accel/accel.sh@20 -- # read -r var val 00:05:23.825 15:03:52 -- accel/accel.sh@21 -- # val= 00:05:23.825 15:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.825 15:03:52 -- accel/accel.sh@20 -- # IFS=: 00:05:23.825 15:03:52 -- accel/accel.sh@20 -- # read -r var val 00:05:23.825 15:03:52 -- accel/accel.sh@21 -- # val= 00:05:23.825 15:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.825 15:03:52 -- accel/accel.sh@20 -- # IFS=: 00:05:23.825 15:03:52 -- accel/accel.sh@20 -- # read -r var val 00:05:23.825 15:03:52 -- accel/accel.sh@21 -- # val= 00:05:23.825 15:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.825 15:03:52 -- accel/accel.sh@20 -- # IFS=: 00:05:23.825 15:03:52 -- accel/accel.sh@20 -- # read -r var val 00:05:23.825 15:03:52 -- accel/accel.sh@21 -- # val= 00:05:23.825 15:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.825 15:03:52 -- accel/accel.sh@20 -- # IFS=: 00:05:23.825 15:03:52 -- accel/accel.sh@20 -- # read -r var val 00:05:23.825 15:03:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:23.825 15:03:52 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:23.825 15:03:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.825 ************************************ 00:05:23.825 END TEST accel_xor 00:05:23.825 ************************************ 00:05:23.825 00:05:23.825 real 0m2.716s 00:05:23.825 user 0m2.380s 00:05:23.825 sys 0m0.132s 00:05:23.825 15:03:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.825 15:03:52 -- common/autotest_common.sh@10 -- # set +x 00:05:23.825 15:03:52 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:23.825 15:03:52 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:23.825 15:03:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.825 15:03:52 -- common/autotest_common.sh@10 -- # set +x 00:05:23.825 ************************************ 00:05:23.825 START TEST accel_xor 00:05:23.825 ************************************ 00:05:23.825 
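The xor test that just ended ran without an explicit -x and reported 'Source buffers: 2'; the accel_xor run starting below passes '-x 3' and its configuration accordingly reports 'Source buffers: 3'. Both bandwidth figures line up with the common 4096-byte transfer size:

echo $(( 280928 * 4096 / 1024 / 1024 ))   # 1097 -> the 2-buffer xor result above
echo $(( 256640 * 4096 / 1024 / 1024 ))   # 1002 -> the 3-buffer xor result below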
15:03:52 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:05:23.825 15:03:52 -- accel/accel.sh@16 -- # local accel_opc 00:05:23.825 15:03:52 -- accel/accel.sh@17 -- # local accel_module 00:05:23.825 15:03:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:05:23.825 15:03:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:23.825 15:03:52 -- accel/accel.sh@12 -- # build_accel_config 00:05:23.825 15:03:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:23.825 15:03:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.825 15:03:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.825 15:03:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:23.825 15:03:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:23.825 15:03:52 -- accel/accel.sh@41 -- # local IFS=, 00:05:23.825 15:03:52 -- accel/accel.sh@42 -- # jq -r . 00:05:23.825 [2024-11-06 15:03:52.902867] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:23.825 [2024-11-06 15:03:52.903156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56642 ] 00:05:23.825 [2024-11-06 15:03:53.031155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.825 [2024-11-06 15:03:53.078446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.201 15:03:54 -- accel/accel.sh@18 -- # out=' 00:05:25.201 SPDK Configuration: 00:05:25.201 Core mask: 0x1 00:05:25.201 00:05:25.201 Accel Perf Configuration: 00:05:25.201 Workload Type: xor 00:05:25.201 Source buffers: 3 00:05:25.201 Transfer size: 4096 bytes 00:05:25.201 Vector count 1 00:05:25.201 Module: software 00:05:25.201 Queue depth: 32 00:05:25.201 Allocate depth: 32 00:05:25.201 # threads/core: 1 00:05:25.201 Run time: 1 seconds 00:05:25.201 Verify: Yes 00:05:25.201 00:05:25.201 Running for 1 seconds... 00:05:25.201 00:05:25.201 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:25.201 ------------------------------------------------------------------------------------ 00:05:25.201 0,0 256640/s 1002 MiB/s 0 0 00:05:25.201 ==================================================================================== 00:05:25.201 Total 256640/s 1002 MiB/s 0 0' 00:05:25.201 15:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:25.202 15:03:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:25.202 15:03:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:25.202 15:03:54 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.202 15:03:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:25.202 15:03:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.202 15:03:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.202 15:03:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:25.202 15:03:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:25.202 15:03:54 -- accel/accel.sh@41 -- # local IFS=, 00:05:25.202 15:03:54 -- accel/accel.sh@42 -- # jq -r . 00:05:25.202 [2024-11-06 15:03:54.251750] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:25.202 [2024-11-06 15:03:54.251842] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56656 ] 00:05:25.202 [2024-11-06 15:03:54.388060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.202 [2024-11-06 15:03:54.436244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.202 15:03:54 -- accel/accel.sh@21 -- # val= 00:05:25.202 15:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:25.202 15:03:54 -- accel/accel.sh@21 -- # val= 00:05:25.202 15:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:25.202 15:03:54 -- accel/accel.sh@21 -- # val=0x1 00:05:25.202 15:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:25.202 15:03:54 -- accel/accel.sh@21 -- # val= 00:05:25.202 15:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:25.202 15:03:54 -- accel/accel.sh@21 -- # val= 00:05:25.202 15:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:25.202 15:03:54 -- accel/accel.sh@21 -- # val=xor 00:05:25.202 15:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.202 15:03:54 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:25.202 15:03:54 -- accel/accel.sh@21 -- # val=3 00:05:25.202 15:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:25.202 15:03:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:25.202 15:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:25.202 15:03:54 -- accel/accel.sh@21 -- # val= 00:05:25.202 15:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:25.202 15:03:54 -- accel/accel.sh@21 -- # val=software 00:05:25.202 15:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.202 15:03:54 -- accel/accel.sh@23 -- # accel_module=software 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:25.202 15:03:54 -- accel/accel.sh@21 -- # val=32 00:05:25.202 15:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:25.202 15:03:54 -- accel/accel.sh@21 -- # val=32 00:05:25.202 15:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:25.202 15:03:54 -- accel/accel.sh@21 -- # val=1 00:05:25.202 15:03:54 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:25.202 15:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:25.461 15:03:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:25.461 15:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.461 15:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:25.461 15:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:25.461 15:03:54 -- accel/accel.sh@21 -- # val=Yes 00:05:25.461 15:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.461 15:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:25.461 15:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:25.461 15:03:54 -- accel/accel.sh@21 -- # val= 00:05:25.461 15:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.461 15:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:25.461 15:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:25.461 15:03:54 -- accel/accel.sh@21 -- # val= 00:05:25.461 15:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.461 15:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:25.461 15:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:26.396 15:03:55 -- accel/accel.sh@21 -- # val= 00:05:26.396 15:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.396 15:03:55 -- accel/accel.sh@20 -- # IFS=: 00:05:26.396 15:03:55 -- accel/accel.sh@20 -- # read -r var val 00:05:26.396 15:03:55 -- accel/accel.sh@21 -- # val= 00:05:26.396 15:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.396 15:03:55 -- accel/accel.sh@20 -- # IFS=: 00:05:26.397 15:03:55 -- accel/accel.sh@20 -- # read -r var val 00:05:26.397 15:03:55 -- accel/accel.sh@21 -- # val= 00:05:26.397 15:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.397 15:03:55 -- accel/accel.sh@20 -- # IFS=: 00:05:26.397 15:03:55 -- accel/accel.sh@20 -- # read -r var val 00:05:26.397 15:03:55 -- accel/accel.sh@21 -- # val= 00:05:26.397 15:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.397 15:03:55 -- accel/accel.sh@20 -- # IFS=: 00:05:26.397 15:03:55 -- accel/accel.sh@20 -- # read -r var val 00:05:26.397 15:03:55 -- accel/accel.sh@21 -- # val= 00:05:26.397 15:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.397 15:03:55 -- accel/accel.sh@20 -- # IFS=: 00:05:26.397 15:03:55 -- accel/accel.sh@20 -- # read -r var val 00:05:26.397 15:03:55 -- accel/accel.sh@21 -- # val= 00:05:26.397 15:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.397 15:03:55 -- accel/accel.sh@20 -- # IFS=: 00:05:26.397 15:03:55 -- accel/accel.sh@20 -- # read -r var val 00:05:26.397 15:03:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:26.397 15:03:55 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:26.397 15:03:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.397 00:05:26.397 real 0m2.726s 00:05:26.397 user 0m2.388s 00:05:26.397 sys 0m0.136s 00:05:26.397 15:03:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.397 ************************************ 00:05:26.397 END TEST accel_xor 00:05:26.397 ************************************ 00:05:26.397 15:03:55 -- common/autotest_common.sh@10 -- # set +x 00:05:26.397 15:03:55 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:26.397 15:03:55 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:26.397 15:03:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.397 15:03:55 -- common/autotest_common.sh@10 -- # set +x 00:05:26.397 ************************************ 00:05:26.397 START TEST accel_dif_verify 00:05:26.397 ************************************ 
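The next case exercises the dif_verify workload: as the configuration block below reports, 4096-byte transfers are verified in 512-byte blocks, each carrying 8 bytes of DIF metadata. A minimal reproduction sketch, under the same assumptions as the XOR sketch above (built SPDK tree, no generated accel config passed via -c /dev/fd/62):

# Flags copied from the trace; the software module handles the operation,
# as the report below confirms.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify
# The Bandwidth column is transfers/s times the 4096-byte transfer size,
# e.g. 116608/s * 4096 B / 2^20 ~= 455 MiB/s in the totals line below.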
00:05:26.397 15:03:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:05:26.397 15:03:55 -- accel/accel.sh@16 -- # local accel_opc 00:05:26.397 15:03:55 -- accel/accel.sh@17 -- # local accel_module 00:05:26.397 15:03:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:05:26.397 15:03:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:26.397 15:03:55 -- accel/accel.sh@12 -- # build_accel_config 00:05:26.397 15:03:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:26.397 15:03:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.397 15:03:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.397 15:03:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:26.397 15:03:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:26.397 15:03:55 -- accel/accel.sh@41 -- # local IFS=, 00:05:26.397 15:03:55 -- accel/accel.sh@42 -- # jq -r . 00:05:26.656 [2024-11-06 15:03:55.678028] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:26.656 [2024-11-06 15:03:55.678118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56696 ] 00:05:26.656 [2024-11-06 15:03:55.813152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.656 [2024-11-06 15:03:55.865130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.032 15:03:57 -- accel/accel.sh@18 -- # out=' 00:05:28.032 SPDK Configuration: 00:05:28.032 Core mask: 0x1 00:05:28.032 00:05:28.032 Accel Perf Configuration: 00:05:28.032 Workload Type: dif_verify 00:05:28.032 Vector size: 4096 bytes 00:05:28.032 Transfer size: 4096 bytes 00:05:28.032 Block size: 512 bytes 00:05:28.032 Metadata size: 8 bytes 00:05:28.032 Vector count 1 00:05:28.032 Module: software 00:05:28.032 Queue depth: 32 00:05:28.032 Allocate depth: 32 00:05:28.032 # threads/core: 1 00:05:28.032 Run time: 1 seconds 00:05:28.032 Verify: No 00:05:28.032 00:05:28.032 Running for 1 seconds... 00:05:28.032 00:05:28.032 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:28.032 ------------------------------------------------------------------------------------ 00:05:28.032 0,0 116608/s 462 MiB/s 0 0 00:05:28.032 ==================================================================================== 00:05:28.032 Total 116608/s 455 MiB/s 0 0' 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:28.032 15:03:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:28.032 15:03:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:28.032 15:03:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:28.032 15:03:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:28.032 15:03:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.032 15:03:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.032 15:03:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:28.032 15:03:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:28.032 15:03:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:28.032 15:03:57 -- accel/accel.sh@42 -- # jq -r . 00:05:28.032 [2024-11-06 15:03:57.047282] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:28.032 [2024-11-06 15:03:57.047374] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56710 ] 00:05:28.032 [2024-11-06 15:03:57.183960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.032 [2024-11-06 15:03:57.233475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.032 15:03:57 -- accel/accel.sh@21 -- # val= 00:05:28.032 15:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:28.032 15:03:57 -- accel/accel.sh@21 -- # val= 00:05:28.032 15:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:28.032 15:03:57 -- accel/accel.sh@21 -- # val=0x1 00:05:28.032 15:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:28.032 15:03:57 -- accel/accel.sh@21 -- # val= 00:05:28.032 15:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:28.032 15:03:57 -- accel/accel.sh@21 -- # val= 00:05:28.032 15:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:28.032 15:03:57 -- accel/accel.sh@21 -- # val=dif_verify 00:05:28.032 15:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.032 15:03:57 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:28.032 15:03:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:28.032 15:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:28.032 15:03:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:28.032 15:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:28.032 15:03:57 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:28.032 15:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:28.032 15:03:57 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:28.032 15:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:28.032 15:03:57 -- accel/accel.sh@21 -- # val= 00:05:28.032 15:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:28.032 15:03:57 -- accel/accel.sh@21 -- # val=software 00:05:28.032 15:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.032 15:03:57 -- accel/accel.sh@23 -- # accel_module=software 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:28.032 15:03:57 -- accel/accel.sh@21 
-- # val=32 00:05:28.032 15:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:28.032 15:03:57 -- accel/accel.sh@21 -- # val=32 00:05:28.032 15:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:28.032 15:03:57 -- accel/accel.sh@21 -- # val=1 00:05:28.032 15:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:28.032 15:03:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:28.032 15:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:28.032 15:03:57 -- accel/accel.sh@21 -- # val=No 00:05:28.032 15:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:28.032 15:03:57 -- accel/accel.sh@21 -- # val= 00:05:28.032 15:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:28.032 15:03:57 -- accel/accel.sh@21 -- # val= 00:05:28.032 15:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:28.032 15:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:29.409 15:03:58 -- accel/accel.sh@21 -- # val= 00:05:29.409 15:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.409 15:03:58 -- accel/accel.sh@20 -- # IFS=: 00:05:29.409 15:03:58 -- accel/accel.sh@20 -- # read -r var val 00:05:29.409 15:03:58 -- accel/accel.sh@21 -- # val= 00:05:29.409 15:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.409 15:03:58 -- accel/accel.sh@20 -- # IFS=: 00:05:29.409 15:03:58 -- accel/accel.sh@20 -- # read -r var val 00:05:29.409 15:03:58 -- accel/accel.sh@21 -- # val= 00:05:29.409 15:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.409 15:03:58 -- accel/accel.sh@20 -- # IFS=: 00:05:29.409 15:03:58 -- accel/accel.sh@20 -- # read -r var val 00:05:29.409 15:03:58 -- accel/accel.sh@21 -- # val= 00:05:29.409 15:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.409 15:03:58 -- accel/accel.sh@20 -- # IFS=: 00:05:29.409 15:03:58 -- accel/accel.sh@20 -- # read -r var val 00:05:29.409 15:03:58 -- accel/accel.sh@21 -- # val= 00:05:29.409 15:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.409 15:03:58 -- accel/accel.sh@20 -- # IFS=: 00:05:29.409 15:03:58 -- accel/accel.sh@20 -- # read -r var val 00:05:29.409 15:03:58 -- accel/accel.sh@21 -- # val= 00:05:29.409 15:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.409 15:03:58 -- accel/accel.sh@20 -- # IFS=: 00:05:29.409 15:03:58 -- accel/accel.sh@20 -- # read -r var val 00:05:29.409 15:03:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:29.409 15:03:58 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:05:29.409 15:03:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.409 00:05:29.409 real 0m2.729s 00:05:29.409 user 0m2.387s 00:05:29.409 sys 0m0.141s 00:05:29.409 15:03:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.409 15:03:58 -- common/autotest_common.sh@10 -- # set +x 00:05:29.409 ************************************ 00:05:29.409 END TEST 
accel_dif_verify 00:05:29.409 ************************************ 00:05:29.409 15:03:58 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:29.409 15:03:58 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:29.409 15:03:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.409 15:03:58 -- common/autotest_common.sh@10 -- # set +x 00:05:29.409 ************************************ 00:05:29.409 START TEST accel_dif_generate 00:05:29.409 ************************************ 00:05:29.409 15:03:58 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:05:29.409 15:03:58 -- accel/accel.sh@16 -- # local accel_opc 00:05:29.409 15:03:58 -- accel/accel.sh@17 -- # local accel_module 00:05:29.409 15:03:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:05:29.409 15:03:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:29.409 15:03:58 -- accel/accel.sh@12 -- # build_accel_config 00:05:29.409 15:03:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:29.409 15:03:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.409 15:03:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.409 15:03:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:29.409 15:03:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:29.409 15:03:58 -- accel/accel.sh@41 -- # local IFS=, 00:05:29.409 15:03:58 -- accel/accel.sh@42 -- # jq -r . 00:05:29.409 [2024-11-06 15:03:58.454990] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:29.409 [2024-11-06 15:03:58.455090] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56739 ] 00:05:29.409 [2024-11-06 15:03:58.585774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.409 [2024-11-06 15:03:58.637234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.785 15:03:59 -- accel/accel.sh@18 -- # out=' 00:05:30.785 SPDK Configuration: 00:05:30.785 Core mask: 0x1 00:05:30.785 00:05:30.785 Accel Perf Configuration: 00:05:30.785 Workload Type: dif_generate 00:05:30.785 Vector size: 4096 bytes 00:05:30.785 Transfer size: 4096 bytes 00:05:30.785 Block size: 512 bytes 00:05:30.785 Metadata size: 8 bytes 00:05:30.785 Vector count 1 00:05:30.785 Module: software 00:05:30.785 Queue depth: 32 00:05:30.785 Allocate depth: 32 00:05:30.785 # threads/core: 1 00:05:30.785 Run time: 1 seconds 00:05:30.785 Verify: No 00:05:30.785 00:05:30.785 Running for 1 seconds... 
00:05:30.785 00:05:30.785 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:30.785 ------------------------------------------------------------------------------------ 00:05:30.785 0,0 140000/s 555 MiB/s 0 0 00:05:30.785 ==================================================================================== 00:05:30.785 Total 140000/s 546 MiB/s 0 0' 00:05:30.785 15:03:59 -- accel/accel.sh@20 -- # IFS=: 00:05:30.785 15:03:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:30.785 15:03:59 -- accel/accel.sh@20 -- # read -r var val 00:05:30.785 15:03:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:30.785 15:03:59 -- accel/accel.sh@12 -- # build_accel_config 00:05:30.785 15:03:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:30.785 15:03:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.785 15:03:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.785 15:03:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:30.785 15:03:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:30.785 15:03:59 -- accel/accel.sh@41 -- # local IFS=, 00:05:30.785 15:03:59 -- accel/accel.sh@42 -- # jq -r . 00:05:30.785 [2024-11-06 15:03:59.816813] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:30.785 [2024-11-06 15:03:59.817083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56764 ] 00:05:30.785 [2024-11-06 15:03:59.952811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.785 [2024-11-06 15:04:00.002100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.785 15:04:00 -- accel/accel.sh@21 -- # val= 00:05:30.785 15:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:30.785 15:04:00 -- accel/accel.sh@21 -- # val= 00:05:30.785 15:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:30.785 15:04:00 -- accel/accel.sh@21 -- # val=0x1 00:05:30.785 15:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:30.785 15:04:00 -- accel/accel.sh@21 -- # val= 00:05:30.785 15:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:30.785 15:04:00 -- accel/accel.sh@21 -- # val= 00:05:30.785 15:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:30.785 15:04:00 -- accel/accel.sh@21 -- # val=dif_generate 00:05:30.785 15:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.785 15:04:00 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:30.785 15:04:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:30.785 15:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # read -r var val 
00:05:30.785 15:04:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:30.785 15:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:30.785 15:04:00 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:30.785 15:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:30.785 15:04:00 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:30.785 15:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:30.785 15:04:00 -- accel/accel.sh@21 -- # val= 00:05:30.785 15:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:30.785 15:04:00 -- accel/accel.sh@21 -- # val=software 00:05:30.785 15:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.785 15:04:00 -- accel/accel.sh@23 -- # accel_module=software 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:30.785 15:04:00 -- accel/accel.sh@21 -- # val=32 00:05:30.785 15:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:30.785 15:04:00 -- accel/accel.sh@21 -- # val=32 00:05:30.785 15:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:30.785 15:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:30.786 15:04:00 -- accel/accel.sh@21 -- # val=1 00:05:30.786 15:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.786 15:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:30.786 15:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:30.786 15:04:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:30.786 15:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.786 15:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:30.786 15:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:30.786 15:04:00 -- accel/accel.sh@21 -- # val=No 00:05:30.786 15:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.786 15:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:30.786 15:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:30.786 15:04:00 -- accel/accel.sh@21 -- # val= 00:05:30.786 15:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.786 15:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:30.786 15:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:30.786 15:04:00 -- accel/accel.sh@21 -- # val= 00:05:30.786 15:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.786 15:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:30.786 15:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:32.160 15:04:01 -- accel/accel.sh@21 -- # val= 00:05:32.160 15:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.160 15:04:01 -- accel/accel.sh@20 -- # IFS=: 00:05:32.160 15:04:01 -- accel/accel.sh@20 -- # read -r var val 00:05:32.160 15:04:01 -- accel/accel.sh@21 -- # val= 00:05:32.160 15:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.160 15:04:01 -- accel/accel.sh@20 -- # IFS=: 00:05:32.160 15:04:01 -- accel/accel.sh@20 -- # read -r var val 00:05:32.160 15:04:01 -- accel/accel.sh@21 -- # val= 00:05:32.160 15:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.160 15:04:01 -- 
accel/accel.sh@20 -- # IFS=: 00:05:32.160 15:04:01 -- accel/accel.sh@20 -- # read -r var val 00:05:32.160 15:04:01 -- accel/accel.sh@21 -- # val= 00:05:32.160 ************************************ 00:05:32.160 END TEST accel_dif_generate 00:05:32.160 ************************************ 00:05:32.160 15:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.160 15:04:01 -- accel/accel.sh@20 -- # IFS=: 00:05:32.160 15:04:01 -- accel/accel.sh@20 -- # read -r var val 00:05:32.160 15:04:01 -- accel/accel.sh@21 -- # val= 00:05:32.160 15:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.160 15:04:01 -- accel/accel.sh@20 -- # IFS=: 00:05:32.160 15:04:01 -- accel/accel.sh@20 -- # read -r var val 00:05:32.160 15:04:01 -- accel/accel.sh@21 -- # val= 00:05:32.160 15:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.160 15:04:01 -- accel/accel.sh@20 -- # IFS=: 00:05:32.160 15:04:01 -- accel/accel.sh@20 -- # read -r var val 00:05:32.160 15:04:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:32.160 15:04:01 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:05:32.160 15:04:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:32.160 00:05:32.160 real 0m2.741s 00:05:32.160 user 0m2.398s 00:05:32.160 sys 0m0.140s 00:05:32.160 15:04:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.160 15:04:01 -- common/autotest_common.sh@10 -- # set +x 00:05:32.160 15:04:01 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:32.160 15:04:01 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:32.160 15:04:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.160 15:04:01 -- common/autotest_common.sh@10 -- # set +x 00:05:32.160 ************************************ 00:05:32.160 START TEST accel_dif_generate_copy 00:05:32.161 ************************************ 00:05:32.161 15:04:01 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:05:32.161 15:04:01 -- accel/accel.sh@16 -- # local accel_opc 00:05:32.161 15:04:01 -- accel/accel.sh@17 -- # local accel_module 00:05:32.161 15:04:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:05:32.161 15:04:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:32.161 15:04:01 -- accel/accel.sh@12 -- # build_accel_config 00:05:32.161 15:04:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:32.161 15:04:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.161 15:04:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.161 15:04:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:32.161 15:04:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:32.161 15:04:01 -- accel/accel.sh@41 -- # local IFS=, 00:05:32.161 15:04:01 -- accel/accel.sh@42 -- # jq -r . 00:05:32.161 [2024-11-06 15:04:01.253594] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:32.161 [2024-11-06 15:04:01.253698] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56793 ] 00:05:32.161 [2024-11-06 15:04:01.392008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.419 [2024-11-06 15:04:01.441697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.355 15:04:02 -- accel/accel.sh@18 -- # out=' 00:05:33.355 SPDK Configuration: 00:05:33.355 Core mask: 0x1 00:05:33.355 00:05:33.355 Accel Perf Configuration: 00:05:33.355 Workload Type: dif_generate_copy 00:05:33.355 Vector size: 4096 bytes 00:05:33.355 Transfer size: 4096 bytes 00:05:33.355 Vector count 1 00:05:33.355 Module: software 00:05:33.355 Queue depth: 32 00:05:33.355 Allocate depth: 32 00:05:33.355 # threads/core: 1 00:05:33.355 Run time: 1 seconds 00:05:33.355 Verify: No 00:05:33.355 00:05:33.355 Running for 1 seconds... 00:05:33.355 00:05:33.355 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:33.355 ------------------------------------------------------------------------------------ 00:05:33.355 0,0 108064/s 428 MiB/s 0 0 00:05:33.355 ==================================================================================== 00:05:33.355 Total 108064/s 422 MiB/s 0 0' 00:05:33.355 15:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:33.355 15:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:33.355 15:04:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:33.355 15:04:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:33.355 15:04:02 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.355 15:04:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:33.355 15:04:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.355 15:04:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.355 15:04:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:33.355 15:04:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:33.355 15:04:02 -- accel/accel.sh@41 -- # local IFS=, 00:05:33.355 15:04:02 -- accel/accel.sh@42 -- # jq -r . 00:05:33.355 [2024-11-06 15:04:02.616122] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:33.355 [2024-11-06 15:04:02.616211] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56807 ] 00:05:33.614 [2024-11-06 15:04:02.755108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.614 [2024-11-06 15:04:02.820629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.614 15:04:02 -- accel/accel.sh@21 -- # val= 00:05:33.614 15:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:33.614 15:04:02 -- accel/accel.sh@21 -- # val= 00:05:33.614 15:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:33.614 15:04:02 -- accel/accel.sh@21 -- # val=0x1 00:05:33.614 15:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:33.614 15:04:02 -- accel/accel.sh@21 -- # val= 00:05:33.614 15:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:33.614 15:04:02 -- accel/accel.sh@21 -- # val= 00:05:33.614 15:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:33.614 15:04:02 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:05:33.614 15:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.614 15:04:02 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:33.614 15:04:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:33.614 15:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:33.614 15:04:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:33.614 15:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:33.614 15:04:02 -- accel/accel.sh@21 -- # val= 00:05:33.614 15:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:33.614 15:04:02 -- accel/accel.sh@21 -- # val=software 00:05:33.614 15:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.614 15:04:02 -- accel/accel.sh@23 -- # accel_module=software 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:33.614 15:04:02 -- accel/accel.sh@21 -- # val=32 00:05:33.614 15:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:33.614 15:04:02 -- accel/accel.sh@21 -- # val=32 00:05:33.614 15:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:33.614 15:04:02 -- accel/accel.sh@21 
-- # val=1 00:05:33.614 15:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:33.614 15:04:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:33.614 15:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:33.614 15:04:02 -- accel/accel.sh@21 -- # val=No 00:05:33.614 15:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:33.614 15:04:02 -- accel/accel.sh@21 -- # val= 00:05:33.614 15:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:33.614 15:04:02 -- accel/accel.sh@21 -- # val= 00:05:33.614 15:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:33.614 15:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:34.989 15:04:03 -- accel/accel.sh@21 -- # val= 00:05:34.989 15:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.990 15:04:03 -- accel/accel.sh@20 -- # IFS=: 00:05:34.990 15:04:03 -- accel/accel.sh@20 -- # read -r var val 00:05:34.990 15:04:03 -- accel/accel.sh@21 -- # val= 00:05:34.990 15:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.990 15:04:03 -- accel/accel.sh@20 -- # IFS=: 00:05:34.990 15:04:03 -- accel/accel.sh@20 -- # read -r var val 00:05:34.990 15:04:03 -- accel/accel.sh@21 -- # val= 00:05:34.990 15:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.990 15:04:03 -- accel/accel.sh@20 -- # IFS=: 00:05:34.990 15:04:03 -- accel/accel.sh@20 -- # read -r var val 00:05:34.990 15:04:03 -- accel/accel.sh@21 -- # val= 00:05:34.990 15:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.990 15:04:03 -- accel/accel.sh@20 -- # IFS=: 00:05:34.990 15:04:03 -- accel/accel.sh@20 -- # read -r var val 00:05:34.990 15:04:03 -- accel/accel.sh@21 -- # val= 00:05:34.990 15:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.990 15:04:03 -- accel/accel.sh@20 -- # IFS=: 00:05:34.990 15:04:03 -- accel/accel.sh@20 -- # read -r var val 00:05:34.990 15:04:03 -- accel/accel.sh@21 -- # val= 00:05:34.990 15:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.990 15:04:03 -- accel/accel.sh@20 -- # IFS=: 00:05:34.990 15:04:03 -- accel/accel.sh@20 -- # read -r var val 00:05:34.990 ************************************ 00:05:34.990 END TEST accel_dif_generate_copy 00:05:34.990 ************************************ 00:05:34.990 15:04:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:34.990 15:04:03 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:05:34.990 15:04:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.990 00:05:34.990 real 0m2.756s 00:05:34.990 user 0m2.402s 00:05:34.990 sys 0m0.146s 00:05:34.990 15:04:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.990 15:04:03 -- common/autotest_common.sh@10 -- # set +x 00:05:34.990 15:04:04 -- accel/accel.sh@107 -- # [[ y == y ]] 00:05:34.990 15:04:04 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:34.990 15:04:04 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:34.990 15:04:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.990 15:04:04 -- 
common/autotest_common.sh@10 -- # set +x 00:05:34.990 ************************************ 00:05:34.990 START TEST accel_comp 00:05:34.990 ************************************ 00:05:34.990 15:04:04 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:34.990 15:04:04 -- accel/accel.sh@16 -- # local accel_opc 00:05:34.990 15:04:04 -- accel/accel.sh@17 -- # local accel_module 00:05:34.990 15:04:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:34.990 15:04:04 -- accel/accel.sh@12 -- # build_accel_config 00:05:34.990 15:04:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:34.990 15:04:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:34.990 15:04:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.990 15:04:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.990 15:04:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:34.990 15:04:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:34.990 15:04:04 -- accel/accel.sh@41 -- # local IFS=, 00:05:34.990 15:04:04 -- accel/accel.sh@42 -- # jq -r . 00:05:34.990 [2024-11-06 15:04:04.058208] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:34.990 [2024-11-06 15:04:04.058297] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56847 ] 00:05:34.990 [2024-11-06 15:04:04.198362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.248 [2024-11-06 15:04:04.268329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.224 15:04:05 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:36.224 00:05:36.224 SPDK Configuration: 00:05:36.224 Core mask: 0x1 00:05:36.224 00:05:36.224 Accel Perf Configuration: 00:05:36.224 Workload Type: compress 00:05:36.224 Transfer size: 4096 bytes 00:05:36.224 Vector count 1 00:05:36.224 Module: software 00:05:36.224 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:36.224 Queue depth: 32 00:05:36.224 Allocate depth: 32 00:05:36.224 # threads/core: 1 00:05:36.224 Run time: 1 seconds 00:05:36.224 Verify: No 00:05:36.224 00:05:36.224 Running for 1 seconds... 
00:05:36.224 00:05:36.224 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:36.224 ------------------------------------------------------------------------------------ 00:05:36.224 0,0 52704/s 219 MiB/s 0 0 00:05:36.224 ==================================================================================== 00:05:36.224 Total 52704/s 205 MiB/s 0 0' 00:05:36.224 15:04:05 -- accel/accel.sh@20 -- # IFS=: 00:05:36.224 15:04:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:36.224 15:04:05 -- accel/accel.sh@20 -- # read -r var val 00:05:36.224 15:04:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:36.224 15:04:05 -- accel/accel.sh@12 -- # build_accel_config 00:05:36.224 15:04:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:36.224 15:04:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.224 15:04:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.224 15:04:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:36.224 15:04:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:36.224 15:04:05 -- accel/accel.sh@41 -- # local IFS=, 00:05:36.224 15:04:05 -- accel/accel.sh@42 -- # jq -r . 00:05:36.224 [2024-11-06 15:04:05.456268] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:36.224 [2024-11-06 15:04:05.456525] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56861 ] 00:05:36.483 [2024-11-06 15:04:05.590517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.483 [2024-11-06 15:04:05.639526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.483 15:04:05 -- accel/accel.sh@21 -- # val= 00:05:36.483 15:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # IFS=: 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # read -r var val 00:05:36.483 15:04:05 -- accel/accel.sh@21 -- # val= 00:05:36.483 15:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # IFS=: 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # read -r var val 00:05:36.483 15:04:05 -- accel/accel.sh@21 -- # val= 00:05:36.483 15:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # IFS=: 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # read -r var val 00:05:36.483 15:04:05 -- accel/accel.sh@21 -- # val=0x1 00:05:36.483 15:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # IFS=: 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # read -r var val 00:05:36.483 15:04:05 -- accel/accel.sh@21 -- # val= 00:05:36.483 15:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # IFS=: 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # read -r var val 00:05:36.483 15:04:05 -- accel/accel.sh@21 -- # val= 00:05:36.483 15:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # IFS=: 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # read -r var val 00:05:36.483 15:04:05 -- accel/accel.sh@21 -- # val=compress 00:05:36.483 15:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.483 15:04:05 -- accel/accel.sh@24 -- # accel_opc=compress 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # IFS=: 
00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # read -r var val 00:05:36.483 15:04:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:36.483 15:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # IFS=: 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # read -r var val 00:05:36.483 15:04:05 -- accel/accel.sh@21 -- # val= 00:05:36.483 15:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # IFS=: 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # read -r var val 00:05:36.483 15:04:05 -- accel/accel.sh@21 -- # val=software 00:05:36.483 15:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.483 15:04:05 -- accel/accel.sh@23 -- # accel_module=software 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # IFS=: 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # read -r var val 00:05:36.483 15:04:05 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:36.483 15:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # IFS=: 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # read -r var val 00:05:36.483 15:04:05 -- accel/accel.sh@21 -- # val=32 00:05:36.483 15:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # IFS=: 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # read -r var val 00:05:36.483 15:04:05 -- accel/accel.sh@21 -- # val=32 00:05:36.483 15:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # IFS=: 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # read -r var val 00:05:36.483 15:04:05 -- accel/accel.sh@21 -- # val=1 00:05:36.483 15:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # IFS=: 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # read -r var val 00:05:36.483 15:04:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:36.483 15:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # IFS=: 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # read -r var val 00:05:36.483 15:04:05 -- accel/accel.sh@21 -- # val=No 00:05:36.483 15:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # IFS=: 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # read -r var val 00:05:36.483 15:04:05 -- accel/accel.sh@21 -- # val= 00:05:36.483 15:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # IFS=: 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # read -r var val 00:05:36.483 15:04:05 -- accel/accel.sh@21 -- # val= 00:05:36.483 15:04:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # IFS=: 00:05:36.483 15:04:05 -- accel/accel.sh@20 -- # read -r var val 00:05:37.858 15:04:06 -- accel/accel.sh@21 -- # val= 00:05:37.858 15:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.858 15:04:06 -- accel/accel.sh@20 -- # IFS=: 00:05:37.858 15:04:06 -- accel/accel.sh@20 -- # read -r var val 00:05:37.858 15:04:06 -- accel/accel.sh@21 -- # val= 00:05:37.858 15:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.858 15:04:06 -- accel/accel.sh@20 -- # IFS=: 00:05:37.858 15:04:06 -- accel/accel.sh@20 -- # read -r var val 00:05:37.858 15:04:06 -- accel/accel.sh@21 -- # val= 00:05:37.858 15:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.858 15:04:06 -- accel/accel.sh@20 -- # IFS=: 00:05:37.858 15:04:06 -- accel/accel.sh@20 -- # read -r var val 00:05:37.858 15:04:06 -- accel/accel.sh@21 -- # val= 
00:05:37.858 15:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.858 15:04:06 -- accel/accel.sh@20 -- # IFS=: 00:05:37.858 15:04:06 -- accel/accel.sh@20 -- # read -r var val 00:05:37.858 15:04:06 -- accel/accel.sh@21 -- # val= 00:05:37.858 15:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.858 15:04:06 -- accel/accel.sh@20 -- # IFS=: 00:05:37.858 15:04:06 -- accel/accel.sh@20 -- # read -r var val 00:05:37.858 15:04:06 -- accel/accel.sh@21 -- # val= 00:05:37.858 15:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.858 15:04:06 -- accel/accel.sh@20 -- # IFS=: 00:05:37.858 15:04:06 -- accel/accel.sh@20 -- # read -r var val 00:05:37.858 15:04:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:37.858 ************************************ 00:05:37.858 END TEST accel_comp 00:05:37.858 ************************************ 00:05:37.858 15:04:06 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:05:37.858 15:04:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.858 00:05:37.858 real 0m2.778s 00:05:37.858 user 0m2.422s 00:05:37.858 sys 0m0.154s 00:05:37.858 15:04:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.858 15:04:06 -- common/autotest_common.sh@10 -- # set +x 00:05:37.858 15:04:06 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:37.858 15:04:06 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:37.858 15:04:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.858 15:04:06 -- common/autotest_common.sh@10 -- # set +x 00:05:37.858 ************************************ 00:05:37.858 START TEST accel_decomp 00:05:37.858 ************************************ 00:05:37.859 15:04:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:37.859 15:04:06 -- accel/accel.sh@16 -- # local accel_opc 00:05:37.859 15:04:06 -- accel/accel.sh@17 -- # local accel_module 00:05:37.859 15:04:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:37.859 15:04:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:37.859 15:04:06 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.859 15:04:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:37.859 15:04:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.859 15:04:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.859 15:04:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:37.859 15:04:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:37.859 15:04:06 -- accel/accel.sh@41 -- # local IFS=, 00:05:37.859 15:04:06 -- accel/accel.sh@42 -- # jq -r . 00:05:37.859 [2024-11-06 15:04:06.886962] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:37.859 [2024-11-06 15:04:06.887049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56896 ] 00:05:37.859 [2024-11-06 15:04:07.022791] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.859 [2024-11-06 15:04:07.071590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.233 15:04:08 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:05:39.233 00:05:39.233 SPDK Configuration: 00:05:39.233 Core mask: 0x1 00:05:39.233 00:05:39.233 Accel Perf Configuration: 00:05:39.233 Workload Type: decompress 00:05:39.233 Transfer size: 4096 bytes 00:05:39.233 Vector count 1 00:05:39.233 Module: software 00:05:39.233 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:39.233 Queue depth: 32 00:05:39.233 Allocate depth: 32 00:05:39.233 # threads/core: 1 00:05:39.233 Run time: 1 seconds 00:05:39.233 Verify: Yes 00:05:39.233 00:05:39.233 Running for 1 seconds... 00:05:39.233 00:05:39.233 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:39.233 ------------------------------------------------------------------------------------ 00:05:39.233 0,0 79488/s 146 MiB/s 0 0 00:05:39.233 ==================================================================================== 00:05:39.233 Total 79488/s 310 MiB/s 0 0' 00:05:39.233 15:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:39.233 15:04:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:39.233 15:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:39.233 15:04:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:39.233 15:04:08 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.233 15:04:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:39.233 15:04:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.233 15:04:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.233 15:04:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:39.233 15:04:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:39.233 15:04:08 -- accel/accel.sh@41 -- # local IFS=, 00:05:39.233 15:04:08 -- accel/accel.sh@42 -- # jq -r . 00:05:39.233 [2024-11-06 15:04:08.255985] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:39.233 [2024-11-06 15:04:08.256607] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56915 ] 00:05:39.233 [2024-11-06 15:04:08.391380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.233 [2024-11-06 15:04:08.442369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.233 15:04:08 -- accel/accel.sh@21 -- # val= 00:05:39.233 15:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.233 15:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:39.233 15:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:39.233 15:04:08 -- accel/accel.sh@21 -- # val= 00:05:39.233 15:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.233 15:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:39.233 15:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:39.233 15:04:08 -- accel/accel.sh@21 -- # val= 00:05:39.233 15:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.233 15:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:39.233 15:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:39.233 15:04:08 -- accel/accel.sh@21 -- # val=0x1 00:05:39.233 15:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.233 15:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:39.233 15:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:39.233 15:04:08 -- accel/accel.sh@21 -- # val= 00:05:39.233 15:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.233 15:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:39.233 15:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:39.233 15:04:08 -- accel/accel.sh@21 -- # val= 00:05:39.233 15:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.233 15:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:39.233 15:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:39.233 15:04:08 -- accel/accel.sh@21 -- # val=decompress 00:05:39.233 15:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.233 15:04:08 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:39.233 15:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:39.233 15:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:39.233 15:04:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:39.233 15:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.233 15:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:39.233 15:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:39.233 15:04:08 -- accel/accel.sh@21 -- # val= 00:05:39.234 15:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.234 15:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:39.234 15:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:39.234 15:04:08 -- accel/accel.sh@21 -- # val=software 00:05:39.234 15:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.234 15:04:08 -- accel/accel.sh@23 -- # accel_module=software 00:05:39.234 15:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:39.234 15:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:39.234 15:04:08 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:39.234 15:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.234 15:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:39.234 15:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:39.234 15:04:08 -- accel/accel.sh@21 -- # val=32 00:05:39.234 15:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.234 15:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:39.234 15:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:39.234 15:04:08 -- 
accel/accel.sh@21 -- # val=32 00:05:39.234 15:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.234 15:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:39.234 15:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:39.234 15:04:08 -- accel/accel.sh@21 -- # val=1 00:05:39.234 15:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.234 15:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:39.234 15:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:39.234 15:04:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:39.234 15:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.234 15:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:39.234 15:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:39.234 15:04:08 -- accel/accel.sh@21 -- # val=Yes 00:05:39.234 15:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.234 15:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:39.234 15:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:39.234 15:04:08 -- accel/accel.sh@21 -- # val= 00:05:39.234 15:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.234 15:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:39.234 15:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:39.234 15:04:08 -- accel/accel.sh@21 -- # val= 00:05:39.234 15:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.234 15:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:39.234 15:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:40.608 15:04:09 -- accel/accel.sh@21 -- # val= 00:05:40.608 15:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.608 15:04:09 -- accel/accel.sh@20 -- # IFS=: 00:05:40.608 15:04:09 -- accel/accel.sh@20 -- # read -r var val 00:05:40.608 15:04:09 -- accel/accel.sh@21 -- # val= 00:05:40.608 15:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.608 15:04:09 -- accel/accel.sh@20 -- # IFS=: 00:05:40.608 15:04:09 -- accel/accel.sh@20 -- # read -r var val 00:05:40.608 15:04:09 -- accel/accel.sh@21 -- # val= 00:05:40.608 15:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.608 15:04:09 -- accel/accel.sh@20 -- # IFS=: 00:05:40.608 15:04:09 -- accel/accel.sh@20 -- # read -r var val 00:05:40.608 15:04:09 -- accel/accel.sh@21 -- # val= 00:05:40.608 15:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.608 15:04:09 -- accel/accel.sh@20 -- # IFS=: 00:05:40.608 15:04:09 -- accel/accel.sh@20 -- # read -r var val 00:05:40.608 15:04:09 -- accel/accel.sh@21 -- # val= 00:05:40.608 15:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.608 15:04:09 -- accel/accel.sh@20 -- # IFS=: 00:05:40.608 15:04:09 -- accel/accel.sh@20 -- # read -r var val 00:05:40.608 15:04:09 -- accel/accel.sh@21 -- # val= 00:05:40.608 15:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.608 15:04:09 -- accel/accel.sh@20 -- # IFS=: 00:05:40.608 15:04:09 -- accel/accel.sh@20 -- # read -r var val 00:05:40.608 15:04:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:40.608 15:04:09 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:40.608 15:04:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.608 00:05:40.608 real 0m2.738s 00:05:40.608 user 0m2.394s 00:05:40.608 sys 0m0.141s 00:05:40.608 ************************************ 00:05:40.608 15:04:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.608 15:04:09 -- common/autotest_common.sh@10 -- # set +x 00:05:40.608 END TEST accel_decomp 00:05:40.608 ************************************ 00:05:40.608 15:04:09 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:05:40.608 15:04:09 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:40.608 15:04:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.608 15:04:09 -- common/autotest_common.sh@10 -- # set +x 00:05:40.608 ************************************ 00:05:40.608 START TEST accel_decmop_full 00:05:40.608 ************************************ 00:05:40.608 15:04:09 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:40.608 15:04:09 -- accel/accel.sh@16 -- # local accel_opc 00:05:40.608 15:04:09 -- accel/accel.sh@17 -- # local accel_module 00:05:40.608 15:04:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:40.608 15:04:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:40.608 15:04:09 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.608 15:04:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.608 15:04:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.608 15:04:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.608 15:04:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.608 15:04:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.608 15:04:09 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.608 15:04:09 -- accel/accel.sh@42 -- # jq -r . 00:05:40.608 [2024-11-06 15:04:09.673374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:40.608 [2024-11-06 15:04:09.673462] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56944 ] 00:05:40.608 [2024-11-06 15:04:09.808177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.608 [2024-11-06 15:04:09.860868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.984 15:04:11 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:41.984 00:05:41.984 SPDK Configuration: 00:05:41.984 Core mask: 0x1 00:05:41.984 00:05:41.984 Accel Perf Configuration: 00:05:41.984 Workload Type: decompress 00:05:41.984 Transfer size: 111250 bytes 00:05:41.984 Vector count 1 00:05:41.984 Module: software 00:05:41.984 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:41.984 Queue depth: 32 00:05:41.984 Allocate depth: 32 00:05:41.984 # threads/core: 1 00:05:41.984 Run time: 1 seconds 00:05:41.984 Verify: Yes 00:05:41.984 00:05:41.984 Running for 1 seconds... 
00:05:41.984 00:05:41.984 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:41.984 ------------------------------------------------------------------------------------ 00:05:41.984 0,0 5216/s 215 MiB/s 0 0 00:05:41.984 ==================================================================================== 00:05:41.984 Total 5216/s 553 MiB/s 0 0' 00:05:41.984 15:04:11 -- accel/accel.sh@20 -- # IFS=: 00:05:41.984 15:04:11 -- accel/accel.sh@20 -- # read -r var val 00:05:41.984 15:04:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:41.984 15:04:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:41.984 15:04:11 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.984 15:04:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:41.984 15:04:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.984 15:04:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.984 15:04:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:41.984 15:04:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:41.984 15:04:11 -- accel/accel.sh@41 -- # local IFS=, 00:05:41.984 15:04:11 -- accel/accel.sh@42 -- # jq -r . 00:05:41.984 [2024-11-06 15:04:11.040409] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:41.984 [2024-11-06 15:04:11.040475] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56964 ] 00:05:41.984 [2024-11-06 15:04:11.170888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.984 [2024-11-06 15:04:11.219555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.985 15:04:11 -- accel/accel.sh@21 -- # val= 00:05:41.985 15:04:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.985 15:04:11 -- accel/accel.sh@20 -- # IFS=: 00:05:41.985 15:04:11 -- accel/accel.sh@20 -- # read -r var val 00:05:41.985 15:04:11 -- accel/accel.sh@21 -- # val= 00:05:41.985 15:04:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.985 15:04:11 -- accel/accel.sh@20 -- # IFS=: 00:05:41.985 15:04:11 -- accel/accel.sh@20 -- # read -r var val 00:05:41.985 15:04:11 -- accel/accel.sh@21 -- # val= 00:05:41.985 15:04:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.985 15:04:11 -- accel/accel.sh@20 -- # IFS=: 00:05:41.985 15:04:11 -- accel/accel.sh@20 -- # read -r var val 00:05:41.985 15:04:11 -- accel/accel.sh@21 -- # val=0x1 00:05:41.985 15:04:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.985 15:04:11 -- accel/accel.sh@20 -- # IFS=: 00:05:41.985 15:04:11 -- accel/accel.sh@20 -- # read -r var val 00:05:41.985 15:04:11 -- accel/accel.sh@21 -- # val= 00:05:41.985 15:04:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.985 15:04:11 -- accel/accel.sh@20 -- # IFS=: 00:05:41.985 15:04:11 -- accel/accel.sh@20 -- # read -r var val 00:05:41.985 15:04:11 -- accel/accel.sh@21 -- # val= 00:05:41.985 15:04:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.985 15:04:11 -- accel/accel.sh@20 -- # IFS=: 00:05:41.985 15:04:11 -- accel/accel.sh@20 -- # read -r var val 00:05:41.985 15:04:11 -- accel/accel.sh@21 -- # val=decompress 00:05:41.985 15:04:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.985 15:04:11 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:41.985 15:04:11 -- accel/accel.sh@20 
-- # IFS=: 00:05:41.985 15:04:11 -- accel/accel.sh@20 -- # read -r var val 00:05:41.985 15:04:11 -- accel/accel.sh@21 -- # val='111250 bytes' 00:05:41.985 15:04:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.985 15:04:11 -- accel/accel.sh@20 -- # IFS=: 00:05:41.985 15:04:11 -- accel/accel.sh@20 -- # read -r var val 00:05:41.985 15:04:11 -- accel/accel.sh@21 -- # val= 00:05:42.243 15:04:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.243 15:04:11 -- accel/accel.sh@20 -- # IFS=: 00:05:42.243 15:04:11 -- accel/accel.sh@20 -- # read -r var val 00:05:42.243 15:04:11 -- accel/accel.sh@21 -- # val=software 00:05:42.243 15:04:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.243 15:04:11 -- accel/accel.sh@23 -- # accel_module=software 00:05:42.243 15:04:11 -- accel/accel.sh@20 -- # IFS=: 00:05:42.243 15:04:11 -- accel/accel.sh@20 -- # read -r var val 00:05:42.243 15:04:11 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:42.244 15:04:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.244 15:04:11 -- accel/accel.sh@20 -- # IFS=: 00:05:42.244 15:04:11 -- accel/accel.sh@20 -- # read -r var val 00:05:42.244 15:04:11 -- accel/accel.sh@21 -- # val=32 00:05:42.244 15:04:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.244 15:04:11 -- accel/accel.sh@20 -- # IFS=: 00:05:42.244 15:04:11 -- accel/accel.sh@20 -- # read -r var val 00:05:42.244 15:04:11 -- accel/accel.sh@21 -- # val=32 00:05:42.244 15:04:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.244 15:04:11 -- accel/accel.sh@20 -- # IFS=: 00:05:42.244 15:04:11 -- accel/accel.sh@20 -- # read -r var val 00:05:42.244 15:04:11 -- accel/accel.sh@21 -- # val=1 00:05:42.244 15:04:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.244 15:04:11 -- accel/accel.sh@20 -- # IFS=: 00:05:42.244 15:04:11 -- accel/accel.sh@20 -- # read -r var val 00:05:42.244 15:04:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:42.244 15:04:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.244 15:04:11 -- accel/accel.sh@20 -- # IFS=: 00:05:42.244 15:04:11 -- accel/accel.sh@20 -- # read -r var val 00:05:42.244 15:04:11 -- accel/accel.sh@21 -- # val=Yes 00:05:42.244 15:04:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.244 15:04:11 -- accel/accel.sh@20 -- # IFS=: 00:05:42.244 15:04:11 -- accel/accel.sh@20 -- # read -r var val 00:05:42.244 15:04:11 -- accel/accel.sh@21 -- # val= 00:05:42.244 15:04:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.244 15:04:11 -- accel/accel.sh@20 -- # IFS=: 00:05:42.244 15:04:11 -- accel/accel.sh@20 -- # read -r var val 00:05:42.244 15:04:11 -- accel/accel.sh@21 -- # val= 00:05:42.244 15:04:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.244 15:04:11 -- accel/accel.sh@20 -- # IFS=: 00:05:42.244 15:04:11 -- accel/accel.sh@20 -- # read -r var val 00:05:43.179 15:04:12 -- accel/accel.sh@21 -- # val= 00:05:43.179 15:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.179 15:04:12 -- accel/accel.sh@20 -- # IFS=: 00:05:43.179 15:04:12 -- accel/accel.sh@20 -- # read -r var val 00:05:43.179 15:04:12 -- accel/accel.sh@21 -- # val= 00:05:43.179 15:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.179 15:04:12 -- accel/accel.sh@20 -- # IFS=: 00:05:43.179 15:04:12 -- accel/accel.sh@20 -- # read -r var val 00:05:43.179 15:04:12 -- accel/accel.sh@21 -- # val= 00:05:43.179 15:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.179 15:04:12 -- accel/accel.sh@20 -- # IFS=: 00:05:43.179 15:04:12 -- accel/accel.sh@20 -- # read -r var val 00:05:43.179 15:04:12 -- accel/accel.sh@21 -- # 
val= 00:05:43.179 15:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.179 15:04:12 -- accel/accel.sh@20 -- # IFS=: 00:05:43.179 15:04:12 -- accel/accel.sh@20 -- # read -r var val 00:05:43.179 15:04:12 -- accel/accel.sh@21 -- # val= 00:05:43.179 15:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.179 15:04:12 -- accel/accel.sh@20 -- # IFS=: 00:05:43.179 15:04:12 -- accel/accel.sh@20 -- # read -r var val 00:05:43.179 15:04:12 -- accel/accel.sh@21 -- # val= 00:05:43.179 15:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.179 15:04:12 -- accel/accel.sh@20 -- # IFS=: 00:05:43.179 15:04:12 -- accel/accel.sh@20 -- # read -r var val 00:05:43.179 15:04:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:43.179 15:04:12 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:43.179 15:04:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.179 00:05:43.179 real 0m2.733s 00:05:43.179 user 0m2.402s 00:05:43.179 sys 0m0.130s 00:05:43.179 15:04:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.179 ************************************ 00:05:43.179 END TEST accel_decmop_full 00:05:43.179 ************************************ 00:05:43.179 15:04:12 -- common/autotest_common.sh@10 -- # set +x 00:05:43.179 15:04:12 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:43.179 15:04:12 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:43.179 15:04:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.179 15:04:12 -- common/autotest_common.sh@10 -- # set +x 00:05:43.179 ************************************ 00:05:43.179 START TEST accel_decomp_mcore 00:05:43.179 ************************************ 00:05:43.179 15:04:12 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:43.179 15:04:12 -- accel/accel.sh@16 -- # local accel_opc 00:05:43.179 15:04:12 -- accel/accel.sh@17 -- # local accel_module 00:05:43.179 15:04:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:43.179 15:04:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:43.179 15:04:12 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.179 15:04:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:43.179 15:04:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.179 15:04:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.179 15:04:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:43.179 15:04:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:43.179 15:04:12 -- accel/accel.sh@41 -- # local IFS=, 00:05:43.179 15:04:12 -- accel/accel.sh@42 -- # jq -r . 00:05:43.438 [2024-11-06 15:04:12.457099] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
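The accel_decomp_mcore run starting here drives the same accel_perf example binary as the single-core tests, only with the wider 0xf core mask. A minimal sketch of an equivalent manual invocation follows, assuming the repository path shown in this log; the -c /dev/fd/62 JSON accel configuration that the harness supplies is omitted, so this approximates rather than exactly replays the recorded command:

  # Sketch: decompress workload on cores 0-3; flags copied from the logged command line
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf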
00:05:43.438 [2024-11-06 15:04:12.457194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56998 ] 00:05:43.438 [2024-11-06 15:04:12.592846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:43.438 [2024-11-06 15:04:12.646219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.438 [2024-11-06 15:04:12.646352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.438 [2024-11-06 15:04:12.646464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.438 [2024-11-06 15:04:12.646465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.817 15:04:13 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:44.817 00:05:44.817 SPDK Configuration: 00:05:44.817 Core mask: 0xf 00:05:44.817 00:05:44.817 Accel Perf Configuration: 00:05:44.817 Workload Type: decompress 00:05:44.817 Transfer size: 4096 bytes 00:05:44.817 Vector count 1 00:05:44.817 Module: software 00:05:44.818 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:44.818 Queue depth: 32 00:05:44.818 Allocate depth: 32 00:05:44.818 # threads/core: 1 00:05:44.818 Run time: 1 seconds 00:05:44.818 Verify: Yes 00:05:44.818 00:05:44.818 Running for 1 seconds... 00:05:44.818 00:05:44.818 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:44.818 ------------------------------------------------------------------------------------ 00:05:44.818 0,0 63680/s 117 MiB/s 0 0 00:05:44.818 3,0 61600/s 113 MiB/s 0 0 00:05:44.818 2,0 61376/s 113 MiB/s 0 0 00:05:44.818 1,0 61184/s 112 MiB/s 0 0 00:05:44.818 ==================================================================================== 00:05:44.818 Total 247840/s 968 MiB/s 0 0' 00:05:44.818 15:04:13 -- accel/accel.sh@20 -- # IFS=: 00:05:44.818 15:04:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:44.818 15:04:13 -- accel/accel.sh@20 -- # read -r var val 00:05:44.818 15:04:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:44.818 15:04:13 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.818 15:04:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.818 15:04:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.818 15:04:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.818 15:04:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.818 15:04:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.818 15:04:13 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.818 15:04:13 -- accel/accel.sh@42 -- # jq -r . 00:05:44.818 [2024-11-06 15:04:13.835342] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:44.818 [2024-11-06 15:04:13.835454] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57015 ] 00:05:44.818 [2024-11-06 15:04:13.971666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:44.818 [2024-11-06 15:04:14.023479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.818 [2024-11-06 15:04:14.023605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.818 [2024-11-06 15:04:14.023707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.818 [2024-11-06 15:04:14.023986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.818 15:04:14 -- accel/accel.sh@21 -- # val= 00:05:44.818 15:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.818 15:04:14 -- accel/accel.sh@21 -- # val= 00:05:44.818 15:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.818 15:04:14 -- accel/accel.sh@21 -- # val= 00:05:44.818 15:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.818 15:04:14 -- accel/accel.sh@21 -- # val=0xf 00:05:44.818 15:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.818 15:04:14 -- accel/accel.sh@21 -- # val= 00:05:44.818 15:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.818 15:04:14 -- accel/accel.sh@21 -- # val= 00:05:44.818 15:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.818 15:04:14 -- accel/accel.sh@21 -- # val=decompress 00:05:44.818 15:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.818 15:04:14 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.818 15:04:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:44.818 15:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.818 15:04:14 -- accel/accel.sh@21 -- # val= 00:05:44.818 15:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.818 15:04:14 -- accel/accel.sh@21 -- # val=software 00:05:44.818 15:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.818 15:04:14 -- accel/accel.sh@23 -- # accel_module=software 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.818 15:04:14 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:44.818 15:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # IFS=: 
00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.818 15:04:14 -- accel/accel.sh@21 -- # val=32 00:05:44.818 15:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.818 15:04:14 -- accel/accel.sh@21 -- # val=32 00:05:44.818 15:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.818 15:04:14 -- accel/accel.sh@21 -- # val=1 00:05:44.818 15:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.818 15:04:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:44.818 15:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.818 15:04:14 -- accel/accel.sh@21 -- # val=Yes 00:05:44.818 15:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.818 15:04:14 -- accel/accel.sh@21 -- # val= 00:05:44.818 15:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.818 15:04:14 -- accel/accel.sh@21 -- # val= 00:05:44.818 15:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.818 15:04:14 -- accel/accel.sh@20 -- # read -r var val 00:05:46.194 15:04:15 -- accel/accel.sh@21 -- # val= 00:05:46.194 15:04:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.194 15:04:15 -- accel/accel.sh@20 -- # IFS=: 00:05:46.194 15:04:15 -- accel/accel.sh@20 -- # read -r var val 00:05:46.194 15:04:15 -- accel/accel.sh@21 -- # val= 00:05:46.194 15:04:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.194 15:04:15 -- accel/accel.sh@20 -- # IFS=: 00:05:46.194 15:04:15 -- accel/accel.sh@20 -- # read -r var val 00:05:46.194 15:04:15 -- accel/accel.sh@21 -- # val= 00:05:46.194 15:04:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.194 15:04:15 -- accel/accel.sh@20 -- # IFS=: 00:05:46.194 15:04:15 -- accel/accel.sh@20 -- # read -r var val 00:05:46.194 15:04:15 -- accel/accel.sh@21 -- # val= 00:05:46.194 15:04:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.194 15:04:15 -- accel/accel.sh@20 -- # IFS=: 00:05:46.194 15:04:15 -- accel/accel.sh@20 -- # read -r var val 00:05:46.194 15:04:15 -- accel/accel.sh@21 -- # val= 00:05:46.194 15:04:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.194 15:04:15 -- accel/accel.sh@20 -- # IFS=: 00:05:46.194 15:04:15 -- accel/accel.sh@20 -- # read -r var val 00:05:46.194 15:04:15 -- accel/accel.sh@21 -- # val= 00:05:46.194 15:04:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.194 15:04:15 -- accel/accel.sh@20 -- # IFS=: 00:05:46.194 15:04:15 -- accel/accel.sh@20 -- # read -r var val 00:05:46.194 15:04:15 -- accel/accel.sh@21 -- # val= 00:05:46.194 15:04:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.194 15:04:15 -- accel/accel.sh@20 -- # IFS=: 00:05:46.194 15:04:15 -- accel/accel.sh@20 -- # read -r var val 00:05:46.194 15:04:15 -- accel/accel.sh@21 -- # val= 00:05:46.194 15:04:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.194 15:04:15 -- accel/accel.sh@20 -- # IFS=: 00:05:46.194 15:04:15 -- 
accel/accel.sh@20 -- # read -r var val 00:05:46.194 15:04:15 -- accel/accel.sh@21 -- # val= 00:05:46.194 15:04:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.194 15:04:15 -- accel/accel.sh@20 -- # IFS=: 00:05:46.194 15:04:15 -- accel/accel.sh@20 -- # read -r var val 00:05:46.194 15:04:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:46.194 15:04:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:46.194 15:04:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.194 00:05:46.194 real 0m2.766s 00:05:46.194 user 0m8.848s 00:05:46.194 sys 0m0.169s 00:05:46.194 15:04:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.194 15:04:15 -- common/autotest_common.sh@10 -- # set +x 00:05:46.194 ************************************ 00:05:46.194 END TEST accel_decomp_mcore 00:05:46.194 ************************************ 00:05:46.194 15:04:15 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:46.194 15:04:15 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:46.194 15:04:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.194 15:04:15 -- common/autotest_common.sh@10 -- # set +x 00:05:46.194 ************************************ 00:05:46.194 START TEST accel_decomp_full_mcore 00:05:46.194 ************************************ 00:05:46.194 15:04:15 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:46.194 15:04:15 -- accel/accel.sh@16 -- # local accel_opc 00:05:46.194 15:04:15 -- accel/accel.sh@17 -- # local accel_module 00:05:46.194 15:04:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:46.194 15:04:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:46.194 15:04:15 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.194 15:04:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.194 15:04:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.194 15:04:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.194 15:04:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.194 15:04:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.194 15:04:15 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.194 15:04:15 -- accel/accel.sh@42 -- # jq -r . 00:05:46.194 [2024-11-06 15:04:15.271567] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:46.194 [2024-11-06 15:04:15.271647] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57053 ] 00:05:46.194 [2024-11-06 15:04:15.403658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.194 [2024-11-06 15:04:15.456751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.194 [2024-11-06 15:04:15.456866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.194 [2024-11-06 15:04:15.457005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.194 [2024-11-06 15:04:15.457214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.570 15:04:16 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:05:47.570 00:05:47.570 SPDK Configuration: 00:05:47.570 Core mask: 0xf 00:05:47.570 00:05:47.570 Accel Perf Configuration: 00:05:47.570 Workload Type: decompress 00:05:47.570 Transfer size: 111250 bytes 00:05:47.570 Vector count 1 00:05:47.570 Module: software 00:05:47.570 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:47.570 Queue depth: 32 00:05:47.570 Allocate depth: 32 00:05:47.570 # threads/core: 1 00:05:47.570 Run time: 1 seconds 00:05:47.570 Verify: Yes 00:05:47.570 00:05:47.570 Running for 1 seconds... 00:05:47.570 00:05:47.570 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:47.570 ------------------------------------------------------------------------------------ 00:05:47.570 0,0 4768/s 196 MiB/s 0 0 00:05:47.570 3,0 4800/s 198 MiB/s 0 0 00:05:47.570 2,0 4800/s 198 MiB/s 0 0 00:05:47.570 1,0 4768/s 196 MiB/s 0 0 00:05:47.570 ==================================================================================== 00:05:47.570 Total 19136/s 2030 MiB/s 0 0' 00:05:47.570 15:04:16 -- accel/accel.sh@20 -- # IFS=: 00:05:47.570 15:04:16 -- accel/accel.sh@20 -- # read -r var val 00:05:47.570 15:04:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:47.571 15:04:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:47.571 15:04:16 -- accel/accel.sh@12 -- # build_accel_config 00:05:47.571 15:04:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:47.571 15:04:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.571 15:04:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.571 15:04:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:47.571 15:04:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:47.571 15:04:16 -- accel/accel.sh@41 -- # local IFS=, 00:05:47.571 15:04:16 -- accel/accel.sh@42 -- # jq -r . 00:05:47.571 [2024-11-06 15:04:16.651224] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
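The accel_decomp_full_mcore variant recorded above combines the larger 111250-byte transfer size with the same 0xf core mask; its accel_perf command line differs from the previous test only by the additional -o 0 argument. A sketch of an equivalent manual run, again with the harness-supplied -c /dev/fd/62 configuration omitted:

  # Sketch: flags copied from the accel_perf command line logged above
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf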
00:05:47.571 [2024-11-06 15:04:16.651308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57075 ] 00:05:47.571 [2024-11-06 15:04:16.778649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:47.571 [2024-11-06 15:04:16.829000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.571 [2024-11-06 15:04:16.829131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.571 [2024-11-06 15:04:16.829265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.571 [2024-11-06 15:04:16.829538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.829 15:04:16 -- accel/accel.sh@21 -- # val= 00:05:47.829 15:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.829 15:04:16 -- accel/accel.sh@20 -- # IFS=: 00:05:47.829 15:04:16 -- accel/accel.sh@20 -- # read -r var val 00:05:47.829 15:04:16 -- accel/accel.sh@21 -- # val= 00:05:47.829 15:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.829 15:04:16 -- accel/accel.sh@20 -- # IFS=: 00:05:47.829 15:04:16 -- accel/accel.sh@20 -- # read -r var val 00:05:47.829 15:04:16 -- accel/accel.sh@21 -- # val= 00:05:47.829 15:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.829 15:04:16 -- accel/accel.sh@20 -- # IFS=: 00:05:47.829 15:04:16 -- accel/accel.sh@20 -- # read -r var val 00:05:47.829 15:04:16 -- accel/accel.sh@21 -- # val=0xf 00:05:47.829 15:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.829 15:04:16 -- accel/accel.sh@20 -- # IFS=: 00:05:47.829 15:04:16 -- accel/accel.sh@20 -- # read -r var val 00:05:47.829 15:04:16 -- accel/accel.sh@21 -- # val= 00:05:47.829 15:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.829 15:04:16 -- accel/accel.sh@20 -- # IFS=: 00:05:47.829 15:04:16 -- accel/accel.sh@20 -- # read -r var val 00:05:47.829 15:04:16 -- accel/accel.sh@21 -- # val= 00:05:47.829 15:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.829 15:04:16 -- accel/accel.sh@20 -- # IFS=: 00:05:47.829 15:04:16 -- accel/accel.sh@20 -- # read -r var val 00:05:47.830 15:04:16 -- accel/accel.sh@21 -- # val=decompress 00:05:47.830 15:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.830 15:04:16 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # IFS=: 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # read -r var val 00:05:47.830 15:04:16 -- accel/accel.sh@21 -- # val='111250 bytes' 00:05:47.830 15:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # IFS=: 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # read -r var val 00:05:47.830 15:04:16 -- accel/accel.sh@21 -- # val= 00:05:47.830 15:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # IFS=: 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # read -r var val 00:05:47.830 15:04:16 -- accel/accel.sh@21 -- # val=software 00:05:47.830 15:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.830 15:04:16 -- accel/accel.sh@23 -- # accel_module=software 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # IFS=: 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # read -r var val 00:05:47.830 15:04:16 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:47.830 15:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # IFS=: 
00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # read -r var val 00:05:47.830 15:04:16 -- accel/accel.sh@21 -- # val=32 00:05:47.830 15:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # IFS=: 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # read -r var val 00:05:47.830 15:04:16 -- accel/accel.sh@21 -- # val=32 00:05:47.830 15:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # IFS=: 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # read -r var val 00:05:47.830 15:04:16 -- accel/accel.sh@21 -- # val=1 00:05:47.830 15:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # IFS=: 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # read -r var val 00:05:47.830 15:04:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:47.830 15:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # IFS=: 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # read -r var val 00:05:47.830 15:04:16 -- accel/accel.sh@21 -- # val=Yes 00:05:47.830 15:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # IFS=: 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # read -r var val 00:05:47.830 15:04:16 -- accel/accel.sh@21 -- # val= 00:05:47.830 15:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # IFS=: 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # read -r var val 00:05:47.830 15:04:16 -- accel/accel.sh@21 -- # val= 00:05:47.830 15:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # IFS=: 00:05:47.830 15:04:16 -- accel/accel.sh@20 -- # read -r var val 00:05:48.765 15:04:18 -- accel/accel.sh@21 -- # val= 00:05:48.765 15:04:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.765 15:04:18 -- accel/accel.sh@20 -- # IFS=: 00:05:48.765 15:04:18 -- accel/accel.sh@20 -- # read -r var val 00:05:48.765 15:04:18 -- accel/accel.sh@21 -- # val= 00:05:48.765 15:04:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.765 15:04:18 -- accel/accel.sh@20 -- # IFS=: 00:05:48.765 15:04:18 -- accel/accel.sh@20 -- # read -r var val 00:05:48.765 15:04:18 -- accel/accel.sh@21 -- # val= 00:05:48.765 15:04:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.765 15:04:18 -- accel/accel.sh@20 -- # IFS=: 00:05:48.765 15:04:18 -- accel/accel.sh@20 -- # read -r var val 00:05:48.765 15:04:18 -- accel/accel.sh@21 -- # val= 00:05:48.765 15:04:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.765 15:04:18 -- accel/accel.sh@20 -- # IFS=: 00:05:48.765 15:04:18 -- accel/accel.sh@20 -- # read -r var val 00:05:48.765 15:04:18 -- accel/accel.sh@21 -- # val= 00:05:48.765 15:04:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.765 15:04:18 -- accel/accel.sh@20 -- # IFS=: 00:05:48.765 15:04:18 -- accel/accel.sh@20 -- # read -r var val 00:05:48.765 15:04:18 -- accel/accel.sh@21 -- # val= 00:05:48.765 15:04:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.765 15:04:18 -- accel/accel.sh@20 -- # IFS=: 00:05:48.765 15:04:18 -- accel/accel.sh@20 -- # read -r var val 00:05:48.765 15:04:18 -- accel/accel.sh@21 -- # val= 00:05:48.765 15:04:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.765 15:04:18 -- accel/accel.sh@20 -- # IFS=: 00:05:48.765 15:04:18 -- accel/accel.sh@20 -- # read -r var val 00:05:48.765 15:04:18 -- accel/accel.sh@21 -- # val= 00:05:48.765 15:04:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.765 15:04:18 -- accel/accel.sh@20 -- # IFS=: 00:05:48.765 15:04:18 -- 
accel/accel.sh@20 -- # read -r var val 00:05:48.765 15:04:18 -- accel/accel.sh@21 -- # val= 00:05:48.765 15:04:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.765 15:04:18 -- accel/accel.sh@20 -- # IFS=: 00:05:48.765 15:04:18 -- accel/accel.sh@20 -- # read -r var val 00:05:48.765 15:04:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:48.765 15:04:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:48.765 15:04:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.765 00:05:48.765 real 0m2.775s 00:05:48.765 user 0m8.955s 00:05:48.765 sys 0m0.154s 00:05:48.765 15:04:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.765 15:04:18 -- common/autotest_common.sh@10 -- # set +x 00:05:48.765 ************************************ 00:05:48.765 END TEST accel_decomp_full_mcore 00:05:48.765 ************************************ 00:05:49.024 15:04:18 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:49.024 15:04:18 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:49.024 15:04:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.024 15:04:18 -- common/autotest_common.sh@10 -- # set +x 00:05:49.024 ************************************ 00:05:49.024 START TEST accel_decomp_mthread 00:05:49.024 ************************************ 00:05:49.024 15:04:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:49.024 15:04:18 -- accel/accel.sh@16 -- # local accel_opc 00:05:49.024 15:04:18 -- accel/accel.sh@17 -- # local accel_module 00:05:49.024 15:04:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:49.024 15:04:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:49.024 15:04:18 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.024 15:04:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.024 15:04:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.024 15:04:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.024 15:04:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.024 15:04:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.024 15:04:18 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.024 15:04:18 -- accel/accel.sh@42 -- # jq -r . 00:05:49.024 [2024-11-06 15:04:18.089932] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:49.024 [2024-11-06 15:04:18.090015] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57115 ] 00:05:49.024 [2024-11-06 15:04:18.213162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.024 [2024-11-06 15:04:18.261456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.399 15:04:19 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:05:50.399 00:05:50.399 SPDK Configuration: 00:05:50.399 Core mask: 0x1 00:05:50.399 00:05:50.399 Accel Perf Configuration: 00:05:50.399 Workload Type: decompress 00:05:50.399 Transfer size: 4096 bytes 00:05:50.399 Vector count 1 00:05:50.399 Module: software 00:05:50.399 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:50.399 Queue depth: 32 00:05:50.399 Allocate depth: 32 00:05:50.399 # threads/core: 2 00:05:50.399 Run time: 1 seconds 00:05:50.399 Verify: Yes 00:05:50.399 00:05:50.399 Running for 1 seconds... 00:05:50.399 00:05:50.399 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:50.399 ------------------------------------------------------------------------------------ 00:05:50.399 0,1 39584/s 72 MiB/s 0 0 00:05:50.399 0,0 39424/s 72 MiB/s 0 0 00:05:50.399 ==================================================================================== 00:05:50.399 Total 79008/s 308 MiB/s 0 0' 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # IFS=: 00:05:50.399 15:04:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # read -r var val 00:05:50.399 15:04:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:50.399 15:04:19 -- accel/accel.sh@12 -- # build_accel_config 00:05:50.399 15:04:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:50.399 15:04:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.399 15:04:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.399 15:04:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:50.399 15:04:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:50.399 15:04:19 -- accel/accel.sh@41 -- # local IFS=, 00:05:50.399 15:04:19 -- accel/accel.sh@42 -- # jq -r . 00:05:50.399 [2024-11-06 15:04:19.443080] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
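Two details in the output above can be cross-checked against values already in the log: the "# threads/core: 2" line in the configuration block reflects the -T 2 flag on the accel_perf command line, which is why the result table lists two rows (0,0 and 0,1) for core 0, and the Total row's bandwidth follows from transfers/s multiplied by the transfer size. A one-line check of the latter, using the numbers recorded above:

  # 79008 transfers/s * 4096 bytes per transfer, expressed in MiB/s
  echo $((79008 * 4096 / 1048576))   # prints 308, matching the Total row above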
00:05:50.399 [2024-11-06 15:04:19.443174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57129 ] 00:05:50.399 [2024-11-06 15:04:19.578824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.399 [2024-11-06 15:04:19.630323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.399 15:04:19 -- accel/accel.sh@21 -- # val= 00:05:50.399 15:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # IFS=: 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # read -r var val 00:05:50.399 15:04:19 -- accel/accel.sh@21 -- # val= 00:05:50.399 15:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # IFS=: 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # read -r var val 00:05:50.399 15:04:19 -- accel/accel.sh@21 -- # val= 00:05:50.399 15:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # IFS=: 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # read -r var val 00:05:50.399 15:04:19 -- accel/accel.sh@21 -- # val=0x1 00:05:50.399 15:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # IFS=: 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # read -r var val 00:05:50.399 15:04:19 -- accel/accel.sh@21 -- # val= 00:05:50.399 15:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # IFS=: 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # read -r var val 00:05:50.399 15:04:19 -- accel/accel.sh@21 -- # val= 00:05:50.399 15:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # IFS=: 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # read -r var val 00:05:50.399 15:04:19 -- accel/accel.sh@21 -- # val=decompress 00:05:50.399 15:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.399 15:04:19 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # IFS=: 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # read -r var val 00:05:50.399 15:04:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:50.399 15:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # IFS=: 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # read -r var val 00:05:50.399 15:04:19 -- accel/accel.sh@21 -- # val= 00:05:50.399 15:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # IFS=: 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # read -r var val 00:05:50.399 15:04:19 -- accel/accel.sh@21 -- # val=software 00:05:50.399 15:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.399 15:04:19 -- accel/accel.sh@23 -- # accel_module=software 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # IFS=: 00:05:50.399 15:04:19 -- accel/accel.sh@20 -- # read -r var val 00:05:50.399 15:04:19 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:50.399 15:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.400 15:04:19 -- accel/accel.sh@20 -- # IFS=: 00:05:50.400 15:04:19 -- accel/accel.sh@20 -- # read -r var val 00:05:50.400 15:04:19 -- accel/accel.sh@21 -- # val=32 00:05:50.400 15:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.400 15:04:19 -- accel/accel.sh@20 -- # IFS=: 00:05:50.400 15:04:19 -- accel/accel.sh@20 -- # read -r var val 00:05:50.400 15:04:19 -- 
accel/accel.sh@21 -- # val=32 00:05:50.400 15:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.400 15:04:19 -- accel/accel.sh@20 -- # IFS=: 00:05:50.400 15:04:19 -- accel/accel.sh@20 -- # read -r var val 00:05:50.400 15:04:19 -- accel/accel.sh@21 -- # val=2 00:05:50.400 15:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.400 15:04:19 -- accel/accel.sh@20 -- # IFS=: 00:05:50.400 15:04:19 -- accel/accel.sh@20 -- # read -r var val 00:05:50.400 15:04:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:50.400 15:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.400 15:04:19 -- accel/accel.sh@20 -- # IFS=: 00:05:50.400 15:04:19 -- accel/accel.sh@20 -- # read -r var val 00:05:50.400 15:04:19 -- accel/accel.sh@21 -- # val=Yes 00:05:50.400 15:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.400 15:04:19 -- accel/accel.sh@20 -- # IFS=: 00:05:50.400 15:04:19 -- accel/accel.sh@20 -- # read -r var val 00:05:50.400 15:04:19 -- accel/accel.sh@21 -- # val= 00:05:50.400 15:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.400 15:04:19 -- accel/accel.sh@20 -- # IFS=: 00:05:50.400 15:04:19 -- accel/accel.sh@20 -- # read -r var val 00:05:50.400 15:04:19 -- accel/accel.sh@21 -- # val= 00:05:50.400 15:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.400 15:04:19 -- accel/accel.sh@20 -- # IFS=: 00:05:50.400 15:04:19 -- accel/accel.sh@20 -- # read -r var val 00:05:51.774 15:04:20 -- accel/accel.sh@21 -- # val= 00:05:51.774 15:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.774 15:04:20 -- accel/accel.sh@20 -- # IFS=: 00:05:51.774 15:04:20 -- accel/accel.sh@20 -- # read -r var val 00:05:51.774 15:04:20 -- accel/accel.sh@21 -- # val= 00:05:51.774 15:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.774 15:04:20 -- accel/accel.sh@20 -- # IFS=: 00:05:51.774 15:04:20 -- accel/accel.sh@20 -- # read -r var val 00:05:51.774 15:04:20 -- accel/accel.sh@21 -- # val= 00:05:51.774 15:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.774 15:04:20 -- accel/accel.sh@20 -- # IFS=: 00:05:51.774 15:04:20 -- accel/accel.sh@20 -- # read -r var val 00:05:51.774 15:04:20 -- accel/accel.sh@21 -- # val= 00:05:51.774 15:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.774 15:04:20 -- accel/accel.sh@20 -- # IFS=: 00:05:51.774 15:04:20 -- accel/accel.sh@20 -- # read -r var val 00:05:51.774 15:04:20 -- accel/accel.sh@21 -- # val= 00:05:51.774 15:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.774 15:04:20 -- accel/accel.sh@20 -- # IFS=: 00:05:51.774 15:04:20 -- accel/accel.sh@20 -- # read -r var val 00:05:51.774 15:04:20 -- accel/accel.sh@21 -- # val= 00:05:51.774 15:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.774 15:04:20 -- accel/accel.sh@20 -- # IFS=: 00:05:51.774 15:04:20 -- accel/accel.sh@20 -- # read -r var val 00:05:51.774 15:04:20 -- accel/accel.sh@21 -- # val= 00:05:51.774 15:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.774 15:04:20 -- accel/accel.sh@20 -- # IFS=: 00:05:51.774 15:04:20 -- accel/accel.sh@20 -- # read -r var val 00:05:51.774 15:04:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:51.774 15:04:20 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:51.774 15:04:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.774 00:05:51.774 real 0m2.725s 00:05:51.774 user 0m2.384s 00:05:51.774 sys 0m0.137s 00:05:51.774 15:04:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.774 15:04:20 -- common/autotest_common.sh@10 -- # set +x 00:05:51.774 ************************************ 00:05:51.774 END 
TEST accel_decomp_mthread 00:05:51.774 ************************************ 00:05:51.774 15:04:20 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:51.774 15:04:20 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:51.774 15:04:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.774 15:04:20 -- common/autotest_common.sh@10 -- # set +x 00:05:51.774 ************************************ 00:05:51.774 START TEST accel_deomp_full_mthread 00:05:51.774 ************************************ 00:05:51.774 15:04:20 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:51.774 15:04:20 -- accel/accel.sh@16 -- # local accel_opc 00:05:51.774 15:04:20 -- accel/accel.sh@17 -- # local accel_module 00:05:51.774 15:04:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:51.774 15:04:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:51.774 15:04:20 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.774 15:04:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:51.774 15:04:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.774 15:04:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.774 15:04:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:51.774 15:04:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:51.774 15:04:20 -- accel/accel.sh@41 -- # local IFS=, 00:05:51.774 15:04:20 -- accel/accel.sh@42 -- # jq -r . 00:05:51.774 [2024-11-06 15:04:20.859354] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:51.774 [2024-11-06 15:04:20.859478] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57163 ] 00:05:51.774 [2024-11-06 15:04:20.994570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.774 [2024-11-06 15:04:21.048939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.149 15:04:22 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:53.149 00:05:53.149 SPDK Configuration: 00:05:53.149 Core mask: 0x1 00:05:53.149 00:05:53.149 Accel Perf Configuration: 00:05:53.149 Workload Type: decompress 00:05:53.149 Transfer size: 111250 bytes 00:05:53.149 Vector count 1 00:05:53.149 Module: software 00:05:53.149 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:53.149 Queue depth: 32 00:05:53.149 Allocate depth: 32 00:05:53.149 # threads/core: 2 00:05:53.149 Run time: 1 seconds 00:05:53.149 Verify: Yes 00:05:53.149 00:05:53.149 Running for 1 seconds... 
00:05:53.149 00:05:53.149 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:53.149 ------------------------------------------------------------------------------------ 00:05:53.149 0,1 2720/s 112 MiB/s 0 0 00:05:53.149 0,0 2688/s 111 MiB/s 0 0 00:05:53.149 ==================================================================================== 00:05:53.149 Total 5408/s 573 MiB/s 0 0' 00:05:53.149 15:04:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:53.149 15:04:22 -- accel/accel.sh@20 -- # IFS=: 00:05:53.149 15:04:22 -- accel/accel.sh@20 -- # read -r var val 00:05:53.149 15:04:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:53.149 15:04:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.149 15:04:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.149 15:04:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.149 15:04:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.149 15:04:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.149 15:04:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.149 15:04:22 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.149 15:04:22 -- accel/accel.sh@42 -- # jq -r . 00:05:53.149 [2024-11-06 15:04:22.241087] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:53.149 [2024-11-06 15:04:22.241186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57183 ] 00:05:53.149 [2024-11-06 15:04:22.370340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.149 [2024-11-06 15:04:22.419010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.408 15:04:22 -- accel/accel.sh@21 -- # val= 00:05:53.408 15:04:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # IFS=: 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # read -r var val 00:05:53.408 15:04:22 -- accel/accel.sh@21 -- # val= 00:05:53.408 15:04:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # IFS=: 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # read -r var val 00:05:53.408 15:04:22 -- accel/accel.sh@21 -- # val= 00:05:53.408 15:04:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # IFS=: 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # read -r var val 00:05:53.408 15:04:22 -- accel/accel.sh@21 -- # val=0x1 00:05:53.408 15:04:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # IFS=: 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # read -r var val 00:05:53.408 15:04:22 -- accel/accel.sh@21 -- # val= 00:05:53.408 15:04:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # IFS=: 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # read -r var val 00:05:53.408 15:04:22 -- accel/accel.sh@21 -- # val= 00:05:53.408 15:04:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # IFS=: 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # read -r var val 00:05:53.408 15:04:22 -- accel/accel.sh@21 -- # val=decompress 00:05:53.408 15:04:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.408 15:04:22 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # IFS=: 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # read -r var val 00:05:53.408 15:04:22 -- accel/accel.sh@21 -- # val='111250 bytes' 00:05:53.408 15:04:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # IFS=: 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # read -r var val 00:05:53.408 15:04:22 -- accel/accel.sh@21 -- # val= 00:05:53.408 15:04:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # IFS=: 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # read -r var val 00:05:53.408 15:04:22 -- accel/accel.sh@21 -- # val=software 00:05:53.408 15:04:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.408 15:04:22 -- accel/accel.sh@23 -- # accel_module=software 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # IFS=: 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # read -r var val 00:05:53.408 15:04:22 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:53.408 15:04:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # IFS=: 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # read -r var val 00:05:53.408 15:04:22 -- accel/accel.sh@21 -- # val=32 00:05:53.408 15:04:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # IFS=: 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # read -r var val 00:05:53.408 15:04:22 -- accel/accel.sh@21 -- # val=32 00:05:53.408 15:04:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # IFS=: 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # read -r var val 00:05:53.408 15:04:22 -- accel/accel.sh@21 -- # val=2 00:05:53.408 15:04:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # IFS=: 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # read -r var val 00:05:53.408 15:04:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:53.408 15:04:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # IFS=: 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # read -r var val 00:05:53.408 15:04:22 -- accel/accel.sh@21 -- # val=Yes 00:05:53.408 15:04:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # IFS=: 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # read -r var val 00:05:53.408 15:04:22 -- accel/accel.sh@21 -- # val= 00:05:53.408 15:04:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # IFS=: 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # read -r var val 00:05:53.408 15:04:22 -- accel/accel.sh@21 -- # val= 00:05:53.408 15:04:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # IFS=: 00:05:53.408 15:04:22 -- accel/accel.sh@20 -- # read -r var val 00:05:54.342 15:04:23 -- accel/accel.sh@21 -- # val= 00:05:54.342 15:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.342 15:04:23 -- accel/accel.sh@20 -- # IFS=: 00:05:54.342 15:04:23 -- accel/accel.sh@20 -- # read -r var val 00:05:54.342 15:04:23 -- accel/accel.sh@21 -- # val= 00:05:54.342 15:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.342 15:04:23 -- accel/accel.sh@20 -- # IFS=: 00:05:54.342 15:04:23 -- accel/accel.sh@20 -- # read -r var val 00:05:54.342 15:04:23 -- accel/accel.sh@21 -- # val= 00:05:54.342 15:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.342 15:04:23 -- accel/accel.sh@20 -- # IFS=: 00:05:54.342 15:04:23 -- accel/accel.sh@20 -- # 
read -r var val 00:05:54.342 15:04:23 -- accel/accel.sh@21 -- # val= 00:05:54.342 15:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.342 15:04:23 -- accel/accel.sh@20 -- # IFS=: 00:05:54.342 15:04:23 -- accel/accel.sh@20 -- # read -r var val 00:05:54.342 15:04:23 -- accel/accel.sh@21 -- # val= 00:05:54.342 15:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.342 15:04:23 -- accel/accel.sh@20 -- # IFS=: 00:05:54.342 15:04:23 -- accel/accel.sh@20 -- # read -r var val 00:05:54.342 15:04:23 -- accel/accel.sh@21 -- # val= 00:05:54.342 15:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.342 15:04:23 -- accel/accel.sh@20 -- # IFS=: 00:05:54.342 15:04:23 -- accel/accel.sh@20 -- # read -r var val 00:05:54.342 15:04:23 -- accel/accel.sh@21 -- # val= 00:05:54.342 15:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.342 15:04:23 -- accel/accel.sh@20 -- # IFS=: 00:05:54.342 15:04:23 -- accel/accel.sh@20 -- # read -r var val 00:05:54.342 15:04:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:54.342 15:04:23 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:54.342 15:04:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.342 00:05:54.342 real 0m2.768s 00:05:54.342 user 0m2.436s 00:05:54.342 sys 0m0.132s 00:05:54.342 15:04:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.342 15:04:23 -- common/autotest_common.sh@10 -- # set +x 00:05:54.342 ************************************ 00:05:54.342 END TEST accel_deomp_full_mthread 00:05:54.342 ************************************ 00:05:54.601 15:04:23 -- accel/accel.sh@116 -- # [[ n == y ]] 00:05:54.601 15:04:23 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:54.601 15:04:23 -- accel/accel.sh@129 -- # build_accel_config 00:05:54.601 15:04:23 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:54.601 15:04:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.601 15:04:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.601 15:04:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.601 15:04:23 -- common/autotest_common.sh@10 -- # set +x 00:05:54.601 15:04:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.601 15:04:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.601 15:04:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.601 15:04:23 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.601 15:04:23 -- accel/accel.sh@42 -- # jq -r . 00:05:54.601 ************************************ 00:05:54.601 START TEST accel_dif_functional_tests 00:05:54.601 ************************************ 00:05:54.601 15:04:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:54.601 [2024-11-06 15:04:23.699731] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:54.601 [2024-11-06 15:04:23.699837] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57213 ] 00:05:54.601 [2024-11-06 15:04:23.836404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.859 [2024-11-06 15:04:23.887037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.859 [2024-11-06 15:04:23.887170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.859 [2024-11-06 15:04:23.887175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.859 00:05:54.859 00:05:54.859 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.859 http://cunit.sourceforge.net/ 00:05:54.859 00:05:54.859 00:05:54.859 Suite: accel_dif 00:05:54.859 Test: verify: DIF generated, GUARD check ...passed 00:05:54.859 Test: verify: DIF generated, APPTAG check ...passed 00:05:54.859 Test: verify: DIF generated, REFTAG check ...passed 00:05:54.859 Test: verify: DIF not generated, GUARD check ...passed 00:05:54.859 Test: verify: DIF not generated, APPTAG check ...passed 00:05:54.859 Test: verify: DIF not generated, REFTAG check ...[2024-11-06 15:04:23.936139] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:54.859 [2024-11-06 15:04:23.936219] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:54.859 [2024-11-06 15:04:23.936256] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:54.859 [2024-11-06 15:04:23.936488] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:54.859 passed 00:05:54.859 Test: verify: APPTAG correct, APPTAG check ...[2024-11-06 15:04:23.936521] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:54.859 [2024-11-06 15:04:23.936548] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:54.859 passed 00:05:54.859 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:05:54.859 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:54.859 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:54.859 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:54.859 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-06 15:04:23.936608] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:54.859 passed 00:05:54.859 Test: generate copy: DIF generated, GUARD check ...passed 00:05:54.859 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:54.859 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:54.859 Test: generate copy: DIF generated, no GUARD check flag set ...[2024-11-06 15:04:23.936939] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:54.859 passed 00:05:54.859 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:54.859 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:54.859 Test: generate copy: iovecs-len validate ...passed 00:05:54.859 Test: generate copy: buffer alignment validate ...passed 00:05:54.859 00:05:54.859 Run Summary: Type Total Ran Passed Failed Inactive 00:05:54.859 suites 1 1 n/a 0 0 00:05:54.859 tests 20 20 20 0 0 00:05:54.859 
asserts 204 204 204 0 n/a 00:05:54.859 00:05:54.859 Elapsed time = 0.002 seconds 00:05:54.860 [2024-11-06 15:04:23.937262] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:05:54.860 00:05:54.860 real 0m0.452s 00:05:54.860 user 0m0.525s 00:05:54.860 sys 0m0.093s 00:05:54.860 15:04:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.860 15:04:24 -- common/autotest_common.sh@10 -- # set +x 00:05:54.860 ************************************ 00:05:54.860 END TEST accel_dif_functional_tests 00:05:54.860 ************************************ 00:05:55.118 00:05:55.118 real 0m59.271s 00:05:55.118 user 1m4.551s 00:05:55.118 sys 0m4.253s 00:05:55.118 15:04:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.118 15:04:24 -- common/autotest_common.sh@10 -- # set +x 00:05:55.118 ************************************ 00:05:55.118 END TEST accel 00:05:55.118 ************************************ 00:05:55.118 15:04:24 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:55.118 15:04:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.118 15:04:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.118 15:04:24 -- common/autotest_common.sh@10 -- # set +x 00:05:55.118 ************************************ 00:05:55.118 START TEST accel_rpc 00:05:55.118 ************************************ 00:05:55.118 15:04:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:55.118 * Looking for test storage... 00:05:55.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:55.118 15:04:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:55.118 15:04:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:55.118 15:04:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:55.118 15:04:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:55.118 15:04:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:55.118 15:04:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:55.118 15:04:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:55.118 15:04:24 -- scripts/common.sh@335 -- # IFS=.-: 00:05:55.118 15:04:24 -- scripts/common.sh@335 -- # read -ra ver1 00:05:55.118 15:04:24 -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.118 15:04:24 -- scripts/common.sh@336 -- # read -ra ver2 00:05:55.118 15:04:24 -- scripts/common.sh@337 -- # local 'op=<' 00:05:55.118 15:04:24 -- scripts/common.sh@339 -- # ver1_l=2 00:05:55.118 15:04:24 -- scripts/common.sh@340 -- # ver2_l=1 00:05:55.118 15:04:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:55.118 15:04:24 -- scripts/common.sh@343 -- # case "$op" in 00:05:55.118 15:04:24 -- scripts/common.sh@344 -- # : 1 00:05:55.118 15:04:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:55.118 15:04:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.118 15:04:24 -- scripts/common.sh@364 -- # decimal 1 00:05:55.118 15:04:24 -- scripts/common.sh@352 -- # local d=1 00:05:55.118 15:04:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.118 15:04:24 -- scripts/common.sh@354 -- # echo 1 00:05:55.118 15:04:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:55.118 15:04:24 -- scripts/common.sh@365 -- # decimal 2 00:05:55.118 15:04:24 -- scripts/common.sh@352 -- # local d=2 00:05:55.118 15:04:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.118 15:04:24 -- scripts/common.sh@354 -- # echo 2 00:05:55.118 15:04:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:55.118 15:04:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:55.118 15:04:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:55.118 15:04:24 -- scripts/common.sh@367 -- # return 0 00:05:55.118 15:04:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.118 15:04:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:55.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.118 --rc genhtml_branch_coverage=1 00:05:55.118 --rc genhtml_function_coverage=1 00:05:55.118 --rc genhtml_legend=1 00:05:55.118 --rc geninfo_all_blocks=1 00:05:55.118 --rc geninfo_unexecuted_blocks=1 00:05:55.118 00:05:55.118 ' 00:05:55.118 15:04:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:55.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.118 --rc genhtml_branch_coverage=1 00:05:55.118 --rc genhtml_function_coverage=1 00:05:55.118 --rc genhtml_legend=1 00:05:55.118 --rc geninfo_all_blocks=1 00:05:55.118 --rc geninfo_unexecuted_blocks=1 00:05:55.118 00:05:55.118 ' 00:05:55.118 15:04:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:55.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.118 --rc genhtml_branch_coverage=1 00:05:55.118 --rc genhtml_function_coverage=1 00:05:55.118 --rc genhtml_legend=1 00:05:55.118 --rc geninfo_all_blocks=1 00:05:55.118 --rc geninfo_unexecuted_blocks=1 00:05:55.118 00:05:55.118 ' 00:05:55.118 15:04:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:55.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.118 --rc genhtml_branch_coverage=1 00:05:55.118 --rc genhtml_function_coverage=1 00:05:55.118 --rc genhtml_legend=1 00:05:55.118 --rc geninfo_all_blocks=1 00:05:55.118 --rc geninfo_unexecuted_blocks=1 00:05:55.118 00:05:55.118 ' 00:05:55.118 15:04:24 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:55.118 15:04:24 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=57290 00:05:55.118 15:04:24 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:55.118 15:04:24 -- accel/accel_rpc.sh@15 -- # waitforlisten 57290 00:05:55.118 15:04:24 -- common/autotest_common.sh@829 -- # '[' -z 57290 ']' 00:05:55.118 15:04:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.118 15:04:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.118 15:04:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:55.118 15:04:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.118 15:04:24 -- common/autotest_common.sh@10 -- # set +x 00:05:55.376 [2024-11-06 15:04:24.416174] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.376 [2024-11-06 15:04:24.416282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57290 ] 00:05:55.376 [2024-11-06 15:04:24.547523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.376 [2024-11-06 15:04:24.599876] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.376 [2024-11-06 15:04:24.600088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.635 15:04:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.635 15:04:24 -- common/autotest_common.sh@862 -- # return 0 00:05:55.635 15:04:24 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:55.635 15:04:24 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:55.635 15:04:24 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:55.635 15:04:24 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:55.635 15:04:24 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:55.635 15:04:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.635 15:04:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.635 15:04:24 -- common/autotest_common.sh@10 -- # set +x 00:05:55.635 ************************************ 00:05:55.635 START TEST accel_assign_opcode 00:05:55.635 ************************************ 00:05:55.635 15:04:24 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:05:55.635 15:04:24 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:55.635 15:04:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.635 15:04:24 -- common/autotest_common.sh@10 -- # set +x 00:05:55.635 [2024-11-06 15:04:24.684483] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:55.635 15:04:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.635 15:04:24 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:55.635 15:04:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.635 15:04:24 -- common/autotest_common.sh@10 -- # set +x 00:05:55.635 [2024-11-06 15:04:24.692480] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:55.635 15:04:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.635 15:04:24 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:55.635 15:04:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.635 15:04:24 -- common/autotest_common.sh@10 -- # set +x 00:05:55.635 15:04:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.635 15:04:24 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:55.635 15:04:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.635 15:04:24 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:55.635 15:04:24 -- common/autotest_common.sh@10 -- # set +x 00:05:55.635 15:04:24 -- accel/accel_rpc.sh@42 -- # grep software 00:05:55.635 15:04:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.635 software 00:05:55.635 00:05:55.635 
real 0m0.198s 00:05:55.635 user 0m0.053s 00:05:55.635 sys 0m0.014s 00:05:55.635 15:04:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.635 15:04:24 -- common/autotest_common.sh@10 -- # set +x 00:05:55.635 ************************************ 00:05:55.635 END TEST accel_assign_opcode 00:05:55.635 ************************************ 00:05:55.894 15:04:24 -- accel/accel_rpc.sh@55 -- # killprocess 57290 00:05:55.894 15:04:24 -- common/autotest_common.sh@936 -- # '[' -z 57290 ']' 00:05:55.894 15:04:24 -- common/autotest_common.sh@940 -- # kill -0 57290 00:05:55.894 15:04:24 -- common/autotest_common.sh@941 -- # uname 00:05:55.894 15:04:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:55.894 15:04:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57290 00:05:55.894 15:04:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:55.894 15:04:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:55.894 killing process with pid 57290 00:05:55.894 15:04:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57290' 00:05:55.894 15:04:24 -- common/autotest_common.sh@955 -- # kill 57290 00:05:55.894 15:04:24 -- common/autotest_common.sh@960 -- # wait 57290 00:05:56.152 00:05:56.152 real 0m1.038s 00:05:56.152 user 0m1.065s 00:05:56.152 sys 0m0.308s 00:05:56.152 15:04:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.152 15:04:25 -- common/autotest_common.sh@10 -- # set +x 00:05:56.152 ************************************ 00:05:56.152 END TEST accel_rpc 00:05:56.152 ************************************ 00:05:56.152 15:04:25 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:56.152 15:04:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.152 15:04:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.152 15:04:25 -- common/autotest_common.sh@10 -- # set +x 00:05:56.152 ************************************ 00:05:56.152 START TEST app_cmdline 00:05:56.152 ************************************ 00:05:56.152 15:04:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:56.152 * Looking for test storage... 
00:05:56.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:56.152 15:04:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:56.152 15:04:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:56.152 15:04:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:56.152 15:04:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:56.152 15:04:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:56.152 15:04:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:56.152 15:04:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:56.152 15:04:25 -- scripts/common.sh@335 -- # IFS=.-: 00:05:56.152 15:04:25 -- scripts/common.sh@335 -- # read -ra ver1 00:05:56.152 15:04:25 -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.152 15:04:25 -- scripts/common.sh@336 -- # read -ra ver2 00:05:56.152 15:04:25 -- scripts/common.sh@337 -- # local 'op=<' 00:05:56.152 15:04:25 -- scripts/common.sh@339 -- # ver1_l=2 00:05:56.152 15:04:25 -- scripts/common.sh@340 -- # ver2_l=1 00:05:56.152 15:04:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:56.152 15:04:25 -- scripts/common.sh@343 -- # case "$op" in 00:05:56.152 15:04:25 -- scripts/common.sh@344 -- # : 1 00:05:56.152 15:04:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:56.152 15:04:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:56.152 15:04:25 -- scripts/common.sh@364 -- # decimal 1 00:05:56.152 15:04:25 -- scripts/common.sh@352 -- # local d=1 00:05:56.153 15:04:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.153 15:04:25 -- scripts/common.sh@354 -- # echo 1 00:05:56.153 15:04:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:56.153 15:04:25 -- scripts/common.sh@365 -- # decimal 2 00:05:56.411 15:04:25 -- scripts/common.sh@352 -- # local d=2 00:05:56.411 15:04:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.411 15:04:25 -- scripts/common.sh@354 -- # echo 2 00:05:56.411 15:04:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:56.411 15:04:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:56.411 15:04:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:56.411 15:04:25 -- scripts/common.sh@367 -- # return 0 00:05:56.411 15:04:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.411 15:04:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:56.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.411 --rc genhtml_branch_coverage=1 00:05:56.411 --rc genhtml_function_coverage=1 00:05:56.411 --rc genhtml_legend=1 00:05:56.411 --rc geninfo_all_blocks=1 00:05:56.411 --rc geninfo_unexecuted_blocks=1 00:05:56.411 00:05:56.411 ' 00:05:56.411 15:04:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:56.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.411 --rc genhtml_branch_coverage=1 00:05:56.411 --rc genhtml_function_coverage=1 00:05:56.411 --rc genhtml_legend=1 00:05:56.411 --rc geninfo_all_blocks=1 00:05:56.411 --rc geninfo_unexecuted_blocks=1 00:05:56.411 00:05:56.411 ' 00:05:56.411 15:04:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:56.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.411 --rc genhtml_branch_coverage=1 00:05:56.411 --rc genhtml_function_coverage=1 00:05:56.411 --rc genhtml_legend=1 00:05:56.411 --rc geninfo_all_blocks=1 00:05:56.411 --rc geninfo_unexecuted_blocks=1 00:05:56.411 00:05:56.411 ' 00:05:56.411 15:04:25 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:56.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.411 --rc genhtml_branch_coverage=1 00:05:56.411 --rc genhtml_function_coverage=1 00:05:56.411 --rc genhtml_legend=1 00:05:56.411 --rc geninfo_all_blocks=1 00:05:56.411 --rc geninfo_unexecuted_blocks=1 00:05:56.411 00:05:56.411 ' 00:05:56.411 15:04:25 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:56.411 15:04:25 -- app/cmdline.sh@17 -- # spdk_tgt_pid=57377 00:05:56.411 15:04:25 -- app/cmdline.sh@18 -- # waitforlisten 57377 00:05:56.411 15:04:25 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:56.411 15:04:25 -- common/autotest_common.sh@829 -- # '[' -z 57377 ']' 00:05:56.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.411 15:04:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.411 15:04:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.411 15:04:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.411 15:04:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.411 15:04:25 -- common/autotest_common.sh@10 -- # set +x 00:05:56.411 [2024-11-06 15:04:25.491690] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:56.411 [2024-11-06 15:04:25.491816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57377 ] 00:05:56.411 [2024-11-06 15:04:25.628447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.411 [2024-11-06 15:04:25.678769] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.411 [2024-11-06 15:04:25.678930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.347 15:04:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.347 15:04:26 -- common/autotest_common.sh@862 -- # return 0 00:05:57.347 15:04:26 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:57.605 { 00:05:57.605 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:05:57.605 "fields": { 00:05:57.605 "major": 24, 00:05:57.605 "minor": 1, 00:05:57.605 "patch": 1, 00:05:57.605 "suffix": "-pre", 00:05:57.605 "commit": "c13c99a5e" 00:05:57.605 } 00:05:57.605 } 00:05:57.605 15:04:26 -- app/cmdline.sh@22 -- # expected_methods=() 00:05:57.605 15:04:26 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:57.605 15:04:26 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:57.606 15:04:26 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:57.606 15:04:26 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:57.606 15:04:26 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:57.606 15:04:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.606 15:04:26 -- common/autotest_common.sh@10 -- # set +x 00:05:57.606 15:04:26 -- app/cmdline.sh@26 -- # sort 00:05:57.606 15:04:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.606 15:04:26 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:57.606 15:04:26 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:57.606 15:04:26 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.606 15:04:26 -- common/autotest_common.sh@650 -- # local es=0 00:05:57.606 15:04:26 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.606 15:04:26 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:57.606 15:04:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.606 15:04:26 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:57.606 15:04:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.606 15:04:26 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:57.606 15:04:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.606 15:04:26 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:57.606 15:04:26 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:57.606 15:04:26 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.865 request: 00:05:57.865 { 00:05:57.865 "method": "env_dpdk_get_mem_stats", 00:05:57.865 "req_id": 1 00:05:57.865 } 00:05:57.865 Got JSON-RPC error response 00:05:57.865 response: 00:05:57.865 { 00:05:57.865 "code": -32601, 00:05:57.865 "message": "Method not found" 00:05:57.865 } 00:05:57.865 15:04:27 -- common/autotest_common.sh@653 -- # es=1 00:05:57.865 15:04:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.865 15:04:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.865 15:04:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.865 15:04:27 -- app/cmdline.sh@1 -- # killprocess 57377 00:05:57.865 15:04:27 -- common/autotest_common.sh@936 -- # '[' -z 57377 ']' 00:05:57.865 15:04:27 -- common/autotest_common.sh@940 -- # kill -0 57377 00:05:57.865 15:04:27 -- common/autotest_common.sh@941 -- # uname 00:05:57.865 15:04:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.865 15:04:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57377 00:05:57.865 15:04:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:57.865 15:04:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:57.865 15:04:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57377' 00:05:57.865 killing process with pid 57377 00:05:57.865 15:04:27 -- common/autotest_common.sh@955 -- # kill 57377 00:05:57.865 15:04:27 -- common/autotest_common.sh@960 -- # wait 57377 00:05:58.123 00:05:58.123 real 0m2.047s 00:05:58.123 user 0m2.673s 00:05:58.123 sys 0m0.365s 00:05:58.123 15:04:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.123 ************************************ 00:05:58.123 END TEST app_cmdline 00:05:58.123 ************************************ 00:05:58.123 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:05:58.123 15:04:27 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:58.123 15:04:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.123 15:04:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.123 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:05:58.123 
************************************ 00:05:58.123 START TEST version 00:05:58.123 ************************************ 00:05:58.123 15:04:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:58.383 * Looking for test storage... 00:05:58.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:58.383 15:04:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:58.383 15:04:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:58.383 15:04:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:58.383 15:04:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:58.383 15:04:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:58.383 15:04:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:58.383 15:04:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:58.383 15:04:27 -- scripts/common.sh@335 -- # IFS=.-: 00:05:58.383 15:04:27 -- scripts/common.sh@335 -- # read -ra ver1 00:05:58.383 15:04:27 -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.383 15:04:27 -- scripts/common.sh@336 -- # read -ra ver2 00:05:58.383 15:04:27 -- scripts/common.sh@337 -- # local 'op=<' 00:05:58.383 15:04:27 -- scripts/common.sh@339 -- # ver1_l=2 00:05:58.383 15:04:27 -- scripts/common.sh@340 -- # ver2_l=1 00:05:58.383 15:04:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:58.383 15:04:27 -- scripts/common.sh@343 -- # case "$op" in 00:05:58.383 15:04:27 -- scripts/common.sh@344 -- # : 1 00:05:58.383 15:04:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:58.383 15:04:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.383 15:04:27 -- scripts/common.sh@364 -- # decimal 1 00:05:58.383 15:04:27 -- scripts/common.sh@352 -- # local d=1 00:05:58.383 15:04:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.383 15:04:27 -- scripts/common.sh@354 -- # echo 1 00:05:58.383 15:04:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:58.383 15:04:27 -- scripts/common.sh@365 -- # decimal 2 00:05:58.383 15:04:27 -- scripts/common.sh@352 -- # local d=2 00:05:58.383 15:04:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.383 15:04:27 -- scripts/common.sh@354 -- # echo 2 00:05:58.383 15:04:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:58.383 15:04:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:58.383 15:04:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:58.383 15:04:27 -- scripts/common.sh@367 -- # return 0 00:05:58.383 15:04:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.383 15:04:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:58.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.383 --rc genhtml_branch_coverage=1 00:05:58.383 --rc genhtml_function_coverage=1 00:05:58.383 --rc genhtml_legend=1 00:05:58.383 --rc geninfo_all_blocks=1 00:05:58.383 --rc geninfo_unexecuted_blocks=1 00:05:58.383 00:05:58.383 ' 00:05:58.383 15:04:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:58.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.383 --rc genhtml_branch_coverage=1 00:05:58.383 --rc genhtml_function_coverage=1 00:05:58.383 --rc genhtml_legend=1 00:05:58.383 --rc geninfo_all_blocks=1 00:05:58.383 --rc geninfo_unexecuted_blocks=1 00:05:58.383 00:05:58.383 ' 00:05:58.383 15:04:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:58.383 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:58.383 --rc genhtml_branch_coverage=1 00:05:58.383 --rc genhtml_function_coverage=1 00:05:58.383 --rc genhtml_legend=1 00:05:58.383 --rc geninfo_all_blocks=1 00:05:58.383 --rc geninfo_unexecuted_blocks=1 00:05:58.383 00:05:58.383 ' 00:05:58.383 15:04:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:58.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.383 --rc genhtml_branch_coverage=1 00:05:58.383 --rc genhtml_function_coverage=1 00:05:58.383 --rc genhtml_legend=1 00:05:58.383 --rc geninfo_all_blocks=1 00:05:58.383 --rc geninfo_unexecuted_blocks=1 00:05:58.383 00:05:58.383 ' 00:05:58.383 15:04:27 -- app/version.sh@17 -- # get_header_version major 00:05:58.383 15:04:27 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:58.383 15:04:27 -- app/version.sh@14 -- # cut -f2 00:05:58.383 15:04:27 -- app/version.sh@14 -- # tr -d '"' 00:05:58.383 15:04:27 -- app/version.sh@17 -- # major=24 00:05:58.383 15:04:27 -- app/version.sh@18 -- # get_header_version minor 00:05:58.383 15:04:27 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:58.383 15:04:27 -- app/version.sh@14 -- # cut -f2 00:05:58.383 15:04:27 -- app/version.sh@14 -- # tr -d '"' 00:05:58.383 15:04:27 -- app/version.sh@18 -- # minor=1 00:05:58.383 15:04:27 -- app/version.sh@19 -- # get_header_version patch 00:05:58.383 15:04:27 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:58.383 15:04:27 -- app/version.sh@14 -- # cut -f2 00:05:58.383 15:04:27 -- app/version.sh@14 -- # tr -d '"' 00:05:58.383 15:04:27 -- app/version.sh@19 -- # patch=1 00:05:58.383 15:04:27 -- app/version.sh@20 -- # get_header_version suffix 00:05:58.383 15:04:27 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:58.383 15:04:27 -- app/version.sh@14 -- # cut -f2 00:05:58.383 15:04:27 -- app/version.sh@14 -- # tr -d '"' 00:05:58.383 15:04:27 -- app/version.sh@20 -- # suffix=-pre 00:05:58.383 15:04:27 -- app/version.sh@22 -- # version=24.1 00:05:58.383 15:04:27 -- app/version.sh@25 -- # (( patch != 0 )) 00:05:58.383 15:04:27 -- app/version.sh@25 -- # version=24.1.1 00:05:58.383 15:04:27 -- app/version.sh@28 -- # version=24.1.1rc0 00:05:58.383 15:04:27 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:58.383 15:04:27 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:58.383 15:04:27 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:05:58.383 15:04:27 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:05:58.383 00:05:58.383 real 0m0.246s 00:05:58.383 user 0m0.170s 00:05:58.383 sys 0m0.113s 00:05:58.383 15:04:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.384 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:05:58.384 ************************************ 00:05:58.384 END TEST version 00:05:58.384 ************************************ 00:05:58.643 15:04:27 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:05:58.643 15:04:27 -- spdk/autotest.sh@191 -- # uname -s 00:05:58.643 15:04:27 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
00:05:58.643 15:04:27 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:05:58.643 15:04:27 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:05:58.643 15:04:27 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:05:58.643 15:04:27 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:58.643 15:04:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.643 15:04:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.643 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:05:58.643 ************************************ 00:05:58.643 START TEST spdk_dd 00:05:58.643 ************************************ 00:05:58.643 15:04:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:58.643 * Looking for test storage... 00:05:58.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:58.643 15:04:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:58.643 15:04:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:58.643 15:04:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:58.643 15:04:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:58.643 15:04:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:58.643 15:04:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:58.643 15:04:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:58.643 15:04:27 -- scripts/common.sh@335 -- # IFS=.-: 00:05:58.643 15:04:27 -- scripts/common.sh@335 -- # read -ra ver1 00:05:58.643 15:04:27 -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.643 15:04:27 -- scripts/common.sh@336 -- # read -ra ver2 00:05:58.643 15:04:27 -- scripts/common.sh@337 -- # local 'op=<' 00:05:58.643 15:04:27 -- scripts/common.sh@339 -- # ver1_l=2 00:05:58.643 15:04:27 -- scripts/common.sh@340 -- # ver2_l=1 00:05:58.643 15:04:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:58.643 15:04:27 -- scripts/common.sh@343 -- # case "$op" in 00:05:58.643 15:04:27 -- scripts/common.sh@344 -- # : 1 00:05:58.643 15:04:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:58.643 15:04:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.643 15:04:27 -- scripts/common.sh@364 -- # decimal 1 00:05:58.643 15:04:27 -- scripts/common.sh@352 -- # local d=1 00:05:58.643 15:04:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.643 15:04:27 -- scripts/common.sh@354 -- # echo 1 00:05:58.643 15:04:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:58.643 15:04:27 -- scripts/common.sh@365 -- # decimal 2 00:05:58.643 15:04:27 -- scripts/common.sh@352 -- # local d=2 00:05:58.643 15:04:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.643 15:04:27 -- scripts/common.sh@354 -- # echo 2 00:05:58.643 15:04:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:58.643 15:04:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:58.643 15:04:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:58.643 15:04:27 -- scripts/common.sh@367 -- # return 0 00:05:58.643 15:04:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.643 15:04:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:58.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.643 --rc genhtml_branch_coverage=1 00:05:58.643 --rc genhtml_function_coverage=1 00:05:58.643 --rc genhtml_legend=1 00:05:58.643 --rc geninfo_all_blocks=1 00:05:58.643 --rc geninfo_unexecuted_blocks=1 00:05:58.643 00:05:58.643 ' 00:05:58.643 15:04:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:58.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.643 --rc genhtml_branch_coverage=1 00:05:58.643 --rc genhtml_function_coverage=1 00:05:58.643 --rc genhtml_legend=1 00:05:58.643 --rc geninfo_all_blocks=1 00:05:58.643 --rc geninfo_unexecuted_blocks=1 00:05:58.643 00:05:58.643 ' 00:05:58.643 15:04:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:58.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.643 --rc genhtml_branch_coverage=1 00:05:58.643 --rc genhtml_function_coverage=1 00:05:58.643 --rc genhtml_legend=1 00:05:58.643 --rc geninfo_all_blocks=1 00:05:58.643 --rc geninfo_unexecuted_blocks=1 00:05:58.643 00:05:58.643 ' 00:05:58.643 15:04:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:58.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.643 --rc genhtml_branch_coverage=1 00:05:58.643 --rc genhtml_function_coverage=1 00:05:58.643 --rc genhtml_legend=1 00:05:58.643 --rc geninfo_all_blocks=1 00:05:58.643 --rc geninfo_unexecuted_blocks=1 00:05:58.643 00:05:58.643 ' 00:05:58.643 15:04:27 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:58.643 15:04:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.643 15:04:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.643 15:04:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.643 15:04:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.643 15:04:27 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.643 15:04:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.643 15:04:27 -- paths/export.sh@5 -- # export PATH 00:05:58.643 15:04:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.644 15:04:27 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:58.902 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:59.162 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:59.162 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:59.162 15:04:28 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:59.162 15:04:28 -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:59.162 15:04:28 -- scripts/common.sh@311 -- # local bdf bdfs 00:05:59.163 15:04:28 -- scripts/common.sh@312 -- # local nvmes 00:05:59.163 15:04:28 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:05:59.163 15:04:28 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:59.163 15:04:28 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:05:59.163 15:04:28 -- scripts/common.sh@297 -- # local bdf= 00:05:59.163 15:04:28 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:05:59.163 15:04:28 -- scripts/common.sh@232 -- # local class 00:05:59.163 15:04:28 -- scripts/common.sh@233 -- # local subclass 00:05:59.163 15:04:28 -- scripts/common.sh@234 -- # local progif 00:05:59.163 15:04:28 -- scripts/common.sh@235 -- # printf %02x 1 00:05:59.163 15:04:28 -- scripts/common.sh@235 -- # class=01 00:05:59.163 15:04:28 -- scripts/common.sh@236 -- # printf %02x 8 00:05:59.163 15:04:28 -- scripts/common.sh@236 -- # subclass=08 00:05:59.163 15:04:28 -- scripts/common.sh@237 -- # printf %02x 2 00:05:59.163 15:04:28 -- scripts/common.sh@237 -- # progif=02 00:05:59.163 15:04:28 -- scripts/common.sh@239 -- # hash lspci 00:05:59.163 15:04:28 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:05:59.163 15:04:28 -- scripts/common.sh@242 -- # grep -i -- -p02 00:05:59.163 15:04:28 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:05:59.163 15:04:28 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:59.163 15:04:28 -- scripts/common.sh@244 -- # tr -d '"' 00:05:59.163 15:04:28 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:59.163 15:04:28 -- scripts/common.sh@300 -- # 
pci_can_use 0000:00:06.0 00:05:59.163 15:04:28 -- scripts/common.sh@15 -- # local i 00:05:59.163 15:04:28 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:05:59.163 15:04:28 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:05:59.163 15:04:28 -- scripts/common.sh@24 -- # return 0 00:05:59.163 15:04:28 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:05:59.163 15:04:28 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:59.163 15:04:28 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:05:59.163 15:04:28 -- scripts/common.sh@15 -- # local i 00:05:59.163 15:04:28 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:05:59.163 15:04:28 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:05:59.163 15:04:28 -- scripts/common.sh@24 -- # return 0 00:05:59.163 15:04:28 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:05:59.163 15:04:28 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:05:59.163 15:04:28 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:05:59.163 15:04:28 -- scripts/common.sh@322 -- # uname -s 00:05:59.163 15:04:28 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:05:59.163 15:04:28 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:05:59.163 15:04:28 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:05:59.163 15:04:28 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:05:59.163 15:04:28 -- scripts/common.sh@322 -- # uname -s 00:05:59.163 15:04:28 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:05:59.163 15:04:28 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:05:59.163 15:04:28 -- scripts/common.sh@327 -- # (( 2 )) 00:05:59.163 15:04:28 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:59.163 15:04:28 -- dd/dd.sh@13 -- # check_liburing 00:05:59.163 15:04:28 -- dd/common.sh@139 -- # local lib so 00:05:59.163 15:04:28 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:05:59.163 15:04:28 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == 
liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:05:59.163 
15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:05:59.163 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.163 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.2.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_scsi.so.8.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.2.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ librte_power.so.24 == 
liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.164 15:04:28 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:59.164 15:04:28 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:59.164 * spdk_dd linked to liburing 00:05:59.164 15:04:28 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:59.164 15:04:28 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:59.164 15:04:28 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:59.164 15:04:28 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:59.164 15:04:28 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:59.164 15:04:28 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:59.164 15:04:28 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:59.164 15:04:28 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:59.164 15:04:28 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:59.164 15:04:28 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:59.164 15:04:28 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:59.164 15:04:28 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:59.164 15:04:28 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:59.164 15:04:28 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:59.164 
15:04:28 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:59.164 15:04:28 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:59.164 15:04:28 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:59.164 15:04:28 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:59.165 15:04:28 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:59.165 15:04:28 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:59.165 15:04:28 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:59.165 15:04:28 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:59.165 15:04:28 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:59.165 15:04:28 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:59.165 15:04:28 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:59.165 15:04:28 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:59.165 15:04:28 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:59.165 15:04:28 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:59.165 15:04:28 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:59.165 15:04:28 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:59.165 15:04:28 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:59.165 15:04:28 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:59.165 15:04:28 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:59.165 15:04:28 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:59.165 15:04:28 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:59.165 15:04:28 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:59.165 15:04:28 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:59.165 15:04:28 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:59.165 15:04:28 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:59.165 15:04:28 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:59.165 15:04:28 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:59.165 15:04:28 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:59.165 15:04:28 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:59.165 15:04:28 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:59.165 15:04:28 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:59.165 15:04:28 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:59.165 15:04:28 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:59.165 15:04:28 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:05:59.165 15:04:28 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:05:59.165 15:04:28 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:59.165 15:04:28 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:05:59.165 15:04:28 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:05:59.165 15:04:28 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:05:59.165 15:04:28 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:05:59.165 15:04:28 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:05:59.165 15:04:28 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:05:59.165 15:04:28 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:05:59.165 15:04:28 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:05:59.165 15:04:28 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:05:59.165 15:04:28 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:05:59.165 15:04:28 -- 
common/build_config.sh@59 -- # CONFIG_ISAL=y 00:05:59.165 15:04:28 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:05:59.165 15:04:28 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:05:59.165 15:04:28 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:05:59.165 15:04:28 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:05:59.165 15:04:28 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:05:59.165 15:04:28 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:05:59.165 15:04:28 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:59.165 15:04:28 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:05:59.165 15:04:28 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:05:59.165 15:04:28 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:05:59.165 15:04:28 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:05:59.165 15:04:28 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:05:59.165 15:04:28 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:05:59.165 15:04:28 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:05:59.165 15:04:28 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:05:59.165 15:04:28 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:05:59.165 15:04:28 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:05:59.165 15:04:28 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:59.165 15:04:28 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:05:59.165 15:04:28 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:05:59.165 15:04:28 -- dd/common.sh@149 -- # [[ y != y ]] 00:05:59.165 15:04:28 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:05:59.165 15:04:28 -- dd/common.sh@156 -- # export liburing_in_use=1 00:05:59.165 15:04:28 -- dd/common.sh@156 -- # liburing_in_use=1 00:05:59.165 15:04:28 -- dd/common.sh@157 -- # return 0 00:05:59.165 15:04:28 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:59.165 15:04:28 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:05:59.165 15:04:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:59.165 15:04:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.165 15:04:28 -- common/autotest_common.sh@10 -- # set +x 00:05:59.165 ************************************ 00:05:59.165 START TEST spdk_dd_basic_rw 00:05:59.165 ************************************ 00:05:59.165 15:04:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:05:59.165 * Looking for test storage... 
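The dd/common.sh@142-143 loop traced above walks the ldd output of spdk_dd one dependency per line and matches each soname against liburing.so.*; once liburing.so.2 turns up it prints the "spdk_dd linked to liburing" marker, cross-checks CONFIG_URING in build_config.sh and the presence of /usr/lib64/liburing.so.2, and exports liburing_in_use=1. A minimal sketch of that detection (paths shortened, helper structure assumed):

  liburing_in_use=0
  while read -r lib _ so _; do                        # fields: soname, '=>', resolved path, load address
    [[ $lib == liburing.so.* ]] && liburing_in_use=1
  done < <(ldd build/bin/spdk_dd)
  (( liburing_in_use == 1 )) && printf '* spdk_dd linked to liburing\n'
  if (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )); then
    :   # uring coverage requested but spdk_dd not linked against liburing; not the case in this run
  fi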
00:05:59.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:59.165 15:04:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:59.165 15:04:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:59.165 15:04:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:59.424 15:04:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:59.424 15:04:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:59.424 15:04:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:59.424 15:04:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:59.424 15:04:28 -- scripts/common.sh@335 -- # IFS=.-: 00:05:59.424 15:04:28 -- scripts/common.sh@335 -- # read -ra ver1 00:05:59.424 15:04:28 -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.424 15:04:28 -- scripts/common.sh@336 -- # read -ra ver2 00:05:59.424 15:04:28 -- scripts/common.sh@337 -- # local 'op=<' 00:05:59.424 15:04:28 -- scripts/common.sh@339 -- # ver1_l=2 00:05:59.424 15:04:28 -- scripts/common.sh@340 -- # ver2_l=1 00:05:59.424 15:04:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:59.424 15:04:28 -- scripts/common.sh@343 -- # case "$op" in 00:05:59.424 15:04:28 -- scripts/common.sh@344 -- # : 1 00:05:59.424 15:04:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:59.424 15:04:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.424 15:04:28 -- scripts/common.sh@364 -- # decimal 1 00:05:59.424 15:04:28 -- scripts/common.sh@352 -- # local d=1 00:05:59.424 15:04:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.424 15:04:28 -- scripts/common.sh@354 -- # echo 1 00:05:59.424 15:04:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:59.424 15:04:28 -- scripts/common.sh@365 -- # decimal 2 00:05:59.424 15:04:28 -- scripts/common.sh@352 -- # local d=2 00:05:59.424 15:04:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.424 15:04:28 -- scripts/common.sh@354 -- # echo 2 00:05:59.424 15:04:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:59.424 15:04:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:59.424 15:04:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:59.424 15:04:28 -- scripts/common.sh@367 -- # return 0 00:05:59.424 15:04:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.425 15:04:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:59.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.425 --rc genhtml_branch_coverage=1 00:05:59.425 --rc genhtml_function_coverage=1 00:05:59.425 --rc genhtml_legend=1 00:05:59.425 --rc geninfo_all_blocks=1 00:05:59.425 --rc geninfo_unexecuted_blocks=1 00:05:59.425 00:05:59.425 ' 00:05:59.425 15:04:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:59.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.425 --rc genhtml_branch_coverage=1 00:05:59.425 --rc genhtml_function_coverage=1 00:05:59.425 --rc genhtml_legend=1 00:05:59.425 --rc geninfo_all_blocks=1 00:05:59.425 --rc geninfo_unexecuted_blocks=1 00:05:59.425 00:05:59.425 ' 00:05:59.425 15:04:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:59.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.425 --rc genhtml_branch_coverage=1 00:05:59.425 --rc genhtml_function_coverage=1 00:05:59.425 --rc genhtml_legend=1 00:05:59.425 --rc geninfo_all_blocks=1 00:05:59.425 --rc geninfo_unexecuted_blocks=1 00:05:59.425 00:05:59.425 ' 00:05:59.425 15:04:28 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:59.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.425 --rc genhtml_branch_coverage=1 00:05:59.425 --rc genhtml_function_coverage=1 00:05:59.425 --rc genhtml_legend=1 00:05:59.425 --rc geninfo_all_blocks=1 00:05:59.425 --rc geninfo_unexecuted_blocks=1 00:05:59.425 00:05:59.425 ' 00:05:59.425 15:04:28 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:59.425 15:04:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.425 15:04:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.425 15:04:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.425 15:04:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.425 15:04:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.425 15:04:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.425 15:04:28 -- paths/export.sh@5 -- # export PATH 00:05:59.425 15:04:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.425 15:04:28 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:59.425 15:04:28 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:59.425 15:04:28 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:59.425 15:04:28 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:05:59.425 15:04:28 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:59.425 15:04:28 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' 
['trtype']='pcie') 00:05:59.425 15:04:28 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:59.425 15:04:28 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:59.425 15:04:28 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:59.425 15:04:28 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:05:59.425 15:04:28 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:05:59.425 15:04:28 -- dd/common.sh@126 -- # mapfile -t id 00:05:59.425 15:04:28 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:05:59.686 15:04:28 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command 
Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 
Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 9 Host Read Commands: 2191 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:59.686 15:04:28 -- dd/common.sh@130 -- # lbaf=04 00:05:59.687 15:04:28 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported 
Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive 
Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 9 Host Read Commands: 2191 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:59.687 15:04:28 -- dd/common.sh@132 -- # lbaf=4096 00:05:59.687 15:04:28 -- dd/common.sh@134 -- # echo 4096 00:05:59.687 15:04:28 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:59.687 15:04:28 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:59.687 15:04:28 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:59.687 15:04:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.687 15:04:28 
-- dd/basic_rw.sh@96 -- # : 00:05:59.687 15:04:28 -- common/autotest_common.sh@10 -- # set +x 00:05:59.687 15:04:28 -- dd/basic_rw.sh@96 -- # gen_conf 00:05:59.687 15:04:28 -- dd/common.sh@31 -- # xtrace_disable 00:05:59.687 15:04:28 -- common/autotest_common.sh@10 -- # set +x 00:05:59.687 ************************************ 00:05:59.687 START TEST dd_bs_lt_native_bs 00:05:59.687 ************************************ 00:05:59.687 15:04:28 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:59.687 15:04:28 -- common/autotest_common.sh@650 -- # local es=0 00:05:59.687 15:04:28 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:59.687 15:04:28 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.687 15:04:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.687 15:04:28 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.687 15:04:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.687 15:04:28 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.687 15:04:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.687 15:04:28 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.687 15:04:28 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:59.687 15:04:28 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:59.687 { 00:05:59.687 "subsystems": [ 00:05:59.687 { 00:05:59.687 "subsystem": "bdev", 00:05:59.687 "config": [ 00:05:59.687 { 00:05:59.687 "params": { 00:05:59.687 "trtype": "pcie", 00:05:59.687 "traddr": "0000:00:06.0", 00:05:59.687 "name": "Nvme0" 00:05:59.687 }, 00:05:59.687 "method": "bdev_nvme_attach_controller" 00:05:59.687 }, 00:05:59.687 { 00:05:59.687 "method": "bdev_wait_for_examine" 00:05:59.687 } 00:05:59.687 ] 00:05:59.687 } 00:05:59.687 ] 00:05:59.687 } 00:05:59.687 [2024-11-06 15:04:28.780437] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
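The get_native_nvme_bs step traced just before this test (dd/common.sh@124-134) derives the drive's native block size from spdk_nvme_identify output with two regex captures: first the index of the active LBA format, then that format's Data Size. A condensed sketch (the real helper collects the identify output with mapfile and does more error handling; the regexes are the ones shown in the trace):

  get_native_nvme_bs() {
    local pci=$1 id lbaf re
    id=$(build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re ]] && lbaf=${BASH_REMATCH[1]}
    re="LBA Format #$lbaf: Data Size: *([0-9]+)"
    [[ $id =~ $re ]] && echo "${BASH_REMATCH[1]}"
  }
  native_bs=$(get_native_nvme_bs 0000:00:06.0)   # 4096 here: the namespace's active format is #04

That 4096 is what makes the 2048-byte --bs below illegal.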
00:05:59.687 [2024-11-06 15:04:28.780534] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57723 ] 00:05:59.687 [2024-11-06 15:04:28.919069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.945 [2024-11-06 15:04:28.990253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.945 [2024-11-06 15:04:29.111550] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:59.945 [2024-11-06 15:04:29.111631] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:59.945 [2024-11-06 15:04:29.189995] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:00.204 15:04:29 -- common/autotest_common.sh@653 -- # es=234 00:06:00.204 15:04:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:00.204 15:04:29 -- common/autotest_common.sh@662 -- # es=106 00:06:00.204 15:04:29 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:00.204 15:04:29 -- common/autotest_common.sh@670 -- # es=1 00:06:00.204 15:04:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:00.204 00:06:00.204 real 0m0.579s 00:06:00.204 user 0m0.419s 00:06:00.204 sys 0m0.111s 00:06:00.204 15:04:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.204 15:04:29 -- common/autotest_common.sh@10 -- # set +x 00:06:00.204 ************************************ 00:06:00.204 END TEST dd_bs_lt_native_bs 00:06:00.204 ************************************ 00:06:00.204 15:04:29 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:00.204 15:04:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:00.204 15:04:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.204 15:04:29 -- common/autotest_common.sh@10 -- # set +x 00:06:00.204 ************************************ 00:06:00.204 START TEST dd_rw 00:06:00.204 ************************************ 00:06:00.204 15:04:29 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:06:00.204 15:04:29 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:00.204 15:04:29 -- dd/basic_rw.sh@12 -- # local count size 00:06:00.204 15:04:29 -- dd/basic_rw.sh@13 -- # local qds bss 00:06:00.204 15:04:29 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:00.204 15:04:29 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:00.204 15:04:29 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:00.204 15:04:29 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:00.204 15:04:29 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:00.204 15:04:29 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:00.204 15:04:29 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:00.204 15:04:29 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:00.204 15:04:29 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:00.204 15:04:29 -- dd/basic_rw.sh@23 -- # count=15 00:06:00.204 15:04:29 -- dd/basic_rw.sh@24 -- # count=15 00:06:00.204 15:04:29 -- dd/basic_rw.sh@25 -- # size=61440 00:06:00.204 15:04:29 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:00.204 15:04:29 -- dd/common.sh@98 -- # xtrace_disable 00:06:00.204 15:04:29 -- common/autotest_common.sh@10 -- # set +x 00:06:00.776 15:04:29 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
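dd_bs_lt_native_bs passes precisely because spdk_dd refuses the copy: the test wraps spdk_dd in NOT, so the "--bs value cannot be less than ... native block size" error and the non-zero exit are the expected outcome. The exit-status bookkeeping traced above (es=234, folded to 106, then to 1, then negated) reduces to roughly this simplified sketch, not the actual common/autotest_common.sh implementation:

  NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=106   # large exit codes (here 234) are first folded to 106
    (( es != 0 )) && es=1      # ...then collapsed to a plain failure flag
    (( !es == 0 ))             # invert: NOT succeeds only when the wrapped command failed
  }

  # as invoked in the trace (fd 62 carries the input data, fd 61 the JSON config; binary path shortened):
  NOT spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61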
00:06:00.776 15:04:29 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:00.776 15:04:29 -- dd/common.sh@31 -- # xtrace_disable 00:06:00.776 15:04:29 -- common/autotest_common.sh@10 -- # set +x 00:06:00.776 [2024-11-06 15:04:29.912750] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:00.776 [2024-11-06 15:04:29.913319] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57754 ] 00:06:00.776 { 00:06:00.776 "subsystems": [ 00:06:00.776 { 00:06:00.776 "subsystem": "bdev", 00:06:00.776 "config": [ 00:06:00.776 { 00:06:00.776 "params": { 00:06:00.776 "trtype": "pcie", 00:06:00.776 "traddr": "0000:00:06.0", 00:06:00.776 "name": "Nvme0" 00:06:00.776 }, 00:06:00.776 "method": "bdev_nvme_attach_controller" 00:06:00.776 }, 00:06:00.776 { 00:06:00.776 "method": "bdev_wait_for_examine" 00:06:00.776 } 00:06:00.776 ] 00:06:00.776 } 00:06:00.776 ] 00:06:00.776 } 00:06:00.776 [2024-11-06 15:04:30.049332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.034 [2024-11-06 15:04:30.099027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.034  [2024-11-06T15:04:30.568Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:01.293 00:06:01.293 15:04:30 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:01.293 15:04:30 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:01.293 15:04:30 -- dd/common.sh@31 -- # xtrace_disable 00:06:01.293 15:04:30 -- common/autotest_common.sh@10 -- # set +x 00:06:01.293 [2024-11-06 15:04:30.445834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
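Each of these spdk_dd invocations receives its bdev configuration as a JSON document on a spare file descriptor (--json /dev/fd/6x) rather than from a config file; flattened into the trace, the payload is just a pcie attach for Nvme0 at 0000:00:06.0 followed by bdev_wait_for_examine. A gen_conf-style emitter for it might look like this (the function packaging is illustrative, the JSON itself is taken from the trace):

  gen_conf() {
    printf '%s\n' '{
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [
          { "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller" },
          { "method": "bdev_wait_for_examine" }
        ]
      } ]
    }'
  }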
00:06:01.293 [2024-11-06 15:04:30.446091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57772 ] 00:06:01.293 { 00:06:01.293 "subsystems": [ 00:06:01.293 { 00:06:01.293 "subsystem": "bdev", 00:06:01.293 "config": [ 00:06:01.293 { 00:06:01.293 "params": { 00:06:01.293 "trtype": "pcie", 00:06:01.293 "traddr": "0000:00:06.0", 00:06:01.293 "name": "Nvme0" 00:06:01.293 }, 00:06:01.293 "method": "bdev_nvme_attach_controller" 00:06:01.293 }, 00:06:01.293 { 00:06:01.293 "method": "bdev_wait_for_examine" 00:06:01.293 } 00:06:01.293 ] 00:06:01.293 } 00:06:01.293 ] 00:06:01.293 } 00:06:01.552 [2024-11-06 15:04:30.585431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.552 [2024-11-06 15:04:30.637225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.552  [2024-11-06T15:04:31.085Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:01.810 00:06:01.810 15:04:30 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:01.810 15:04:30 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:01.810 15:04:30 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:01.810 15:04:30 -- dd/common.sh@11 -- # local nvme_ref= 00:06:01.810 15:04:30 -- dd/common.sh@12 -- # local size=61440 00:06:01.810 15:04:30 -- dd/common.sh@14 -- # local bs=1048576 00:06:01.810 15:04:30 -- dd/common.sh@15 -- # local count=1 00:06:01.810 15:04:30 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:01.810 15:04:30 -- dd/common.sh@18 -- # gen_conf 00:06:01.810 15:04:30 -- dd/common.sh@31 -- # xtrace_disable 00:06:01.810 15:04:30 -- common/autotest_common.sh@10 -- # set +x 00:06:01.810 [2024-11-06 15:04:30.996009] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
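Put together, one dd_rw pass is the write / read-back / verify / wipe cycle seen in the last few blocks. Roughly, with the file names and the Nvme0n1 bdev taken from the trace and the config fed through a gen_conf-style emitter as sketched above:

  spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)             # write the generated bytes
  spdk_dd --ib=Nvme0n1 --of=test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json <(gen_conf)  # read them back
  diff -q test/dd/dd.dump0 test/dd/dd.dump1                                                  # byte-for-byte verification
  spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json <(gen_conf)              # clear_nvme before the next pass

The 60/60 [kB] progress figures are simply the 15 x 4096-byte blocks moved in each direction.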
00:06:01.810 [2024-11-06 15:04:30.996102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57780 ] 00:06:01.810 { 00:06:01.810 "subsystems": [ 00:06:01.810 { 00:06:01.810 "subsystem": "bdev", 00:06:01.810 "config": [ 00:06:01.810 { 00:06:01.810 "params": { 00:06:01.810 "trtype": "pcie", 00:06:01.810 "traddr": "0000:00:06.0", 00:06:01.810 "name": "Nvme0" 00:06:01.810 }, 00:06:01.810 "method": "bdev_nvme_attach_controller" 00:06:01.810 }, 00:06:01.810 { 00:06:01.810 "method": "bdev_wait_for_examine" 00:06:01.810 } 00:06:01.810 ] 00:06:01.811 } 00:06:01.811 ] 00:06:01.811 } 00:06:02.084 [2024-11-06 15:04:31.132548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.084 [2024-11-06 15:04:31.181015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.084  [2024-11-06T15:04:31.657Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:02.382 00:06:02.382 15:04:31 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:02.382 15:04:31 -- dd/basic_rw.sh@23 -- # count=15 00:06:02.382 15:04:31 -- dd/basic_rw.sh@24 -- # count=15 00:06:02.382 15:04:31 -- dd/basic_rw.sh@25 -- # size=61440 00:06:02.382 15:04:31 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:02.382 15:04:31 -- dd/common.sh@98 -- # xtrace_disable 00:06:02.382 15:04:31 -- common/autotest_common.sh@10 -- # set +x 00:06:02.972 15:04:31 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:02.972 15:04:31 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:02.972 15:04:31 -- dd/common.sh@31 -- # xtrace_disable 00:06:02.972 15:04:31 -- common/autotest_common.sh@10 -- # set +x 00:06:02.972 [2024-11-06 15:04:32.047521] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:02.972 [2024-11-06 15:04:32.048309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57798 ] 00:06:02.972 { 00:06:02.972 "subsystems": [ 00:06:02.972 { 00:06:02.972 "subsystem": "bdev", 00:06:02.972 "config": [ 00:06:02.972 { 00:06:02.972 "params": { 00:06:02.972 "trtype": "pcie", 00:06:02.972 "traddr": "0000:00:06.0", 00:06:02.972 "name": "Nvme0" 00:06:02.972 }, 00:06:02.972 "method": "bdev_nvme_attach_controller" 00:06:02.972 }, 00:06:02.972 { 00:06:02.972 "method": "bdev_wait_for_examine" 00:06:02.972 } 00:06:02.972 ] 00:06:02.972 } 00:06:02.972 ] 00:06:02.972 } 00:06:02.972 [2024-11-06 15:04:32.183543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.972 [2024-11-06 15:04:32.238251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.230  [2024-11-06T15:04:32.763Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:03.488 00:06:03.488 15:04:32 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:03.488 15:04:32 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:03.488 15:04:32 -- dd/common.sh@31 -- # xtrace_disable 00:06:03.488 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:06:03.488 [2024-11-06 15:04:32.591313] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:03.488 [2024-11-06 15:04:32.591579] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57816 ] 00:06:03.488 { 00:06:03.488 "subsystems": [ 00:06:03.488 { 00:06:03.488 "subsystem": "bdev", 00:06:03.488 "config": [ 00:06:03.488 { 00:06:03.488 "params": { 00:06:03.488 "trtype": "pcie", 00:06:03.488 "traddr": "0000:00:06.0", 00:06:03.488 "name": "Nvme0" 00:06:03.488 }, 00:06:03.488 "method": "bdev_nvme_attach_controller" 00:06:03.488 }, 00:06:03.488 { 00:06:03.488 "method": "bdev_wait_for_examine" 00:06:03.488 } 00:06:03.488 ] 00:06:03.488 } 00:06:03.488 ] 00:06:03.488 } 00:06:03.488 [2024-11-06 15:04:32.729069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.746 [2024-11-06 15:04:32.783270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.746  [2024-11-06T15:04:33.280Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:04.005 00:06:04.005 15:04:33 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:04.005 15:04:33 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:04.005 15:04:33 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:04.005 15:04:33 -- dd/common.sh@11 -- # local nvme_ref= 00:06:04.005 15:04:33 -- dd/common.sh@12 -- # local size=61440 00:06:04.005 15:04:33 -- dd/common.sh@14 -- # local bs=1048576 00:06:04.005 15:04:33 -- dd/common.sh@15 -- # local count=1 00:06:04.005 15:04:33 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:04.005 15:04:33 -- dd/common.sh@18 -- # gen_conf 00:06:04.005 15:04:33 -- dd/common.sh@31 -- # xtrace_disable 00:06:04.005 15:04:33 -- common/autotest_common.sh@10 -- # set +x 00:06:04.005 [2024-11-06 
15:04:33.140703] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:04.005 [2024-11-06 15:04:33.140803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57824 ] 00:06:04.005 { 00:06:04.005 "subsystems": [ 00:06:04.005 { 00:06:04.005 "subsystem": "bdev", 00:06:04.005 "config": [ 00:06:04.005 { 00:06:04.005 "params": { 00:06:04.005 "trtype": "pcie", 00:06:04.005 "traddr": "0000:00:06.0", 00:06:04.005 "name": "Nvme0" 00:06:04.005 }, 00:06:04.005 "method": "bdev_nvme_attach_controller" 00:06:04.005 }, 00:06:04.005 { 00:06:04.005 "method": "bdev_wait_for_examine" 00:06:04.005 } 00:06:04.005 ] 00:06:04.005 } 00:06:04.005 ] 00:06:04.005 } 00:06:04.005 [2024-11-06 15:04:33.275334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.264 [2024-11-06 15:04:33.329033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.264  [2024-11-06T15:04:33.797Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:04.522 00:06:04.522 15:04:33 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:04.522 15:04:33 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:04.522 15:04:33 -- dd/basic_rw.sh@23 -- # count=7 00:06:04.522 15:04:33 -- dd/basic_rw.sh@24 -- # count=7 00:06:04.522 15:04:33 -- dd/basic_rw.sh@25 -- # size=57344 00:06:04.522 15:04:33 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:04.522 15:04:33 -- dd/common.sh@98 -- # xtrace_disable 00:06:04.522 15:04:33 -- common/autotest_common.sh@10 -- # set +x 00:06:05.088 15:04:34 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:05.088 15:04:34 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:05.088 15:04:34 -- dd/common.sh@31 -- # xtrace_disable 00:06:05.088 15:04:34 -- common/autotest_common.sh@10 -- # set +x 00:06:05.088 { 00:06:05.088 "subsystems": [ 00:06:05.088 { 00:06:05.088 "subsystem": "bdev", 00:06:05.088 "config": [ 00:06:05.088 { 00:06:05.088 "params": { 00:06:05.088 "trtype": "pcie", 00:06:05.088 "traddr": "0000:00:06.0", 00:06:05.088 "name": "Nvme0" 00:06:05.088 }, 00:06:05.088 "method": "bdev_nvme_attach_controller" 00:06:05.088 }, 00:06:05.088 { 00:06:05.088 "method": "bdev_wait_for_examine" 00:06:05.088 } 00:06:05.088 ] 00:06:05.088 } 00:06:05.088 ] 00:06:05.088 } 00:06:05.088 [2024-11-06 15:04:34.180910] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
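dd_rw repeats that cycle over a small matrix: block sizes derived from the native 4096 by left-shifting (dd/basic_rw.sh@17-18), each driven at queue depths 1 and 64, with the transfer size being count times block size. Roughly, with the counts taken from the trace:

  native_bs=4096
  qds=(1 64)
  bss=()
  for s in 0 1 2; do bss+=( $(( native_bs << s )) ); done   # 4096 8192 16384
  # per the trace: bs=4096 uses count=15 (61440 bytes, the 60 kB passes above),
  #                bs=8192 uses count=7  (57344 bytes, the 56 kB passes that follow)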
00:06:05.088 [2024-11-06 15:04:34.181027] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57842 ] 00:06:05.088 [2024-11-06 15:04:34.317587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.346 [2024-11-06 15:04:34.368475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.346  [2024-11-06T15:04:34.880Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:05.605 00:06:05.605 15:04:34 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:05.605 15:04:34 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:05.605 15:04:34 -- dd/common.sh@31 -- # xtrace_disable 00:06:05.605 15:04:34 -- common/autotest_common.sh@10 -- # set +x 00:06:05.605 { 00:06:05.605 "subsystems": [ 00:06:05.605 { 00:06:05.605 "subsystem": "bdev", 00:06:05.605 "config": [ 00:06:05.605 { 00:06:05.605 "params": { 00:06:05.605 "trtype": "pcie", 00:06:05.605 "traddr": "0000:00:06.0", 00:06:05.605 "name": "Nvme0" 00:06:05.605 }, 00:06:05.605 "method": "bdev_nvme_attach_controller" 00:06:05.605 }, 00:06:05.605 { 00:06:05.605 "method": "bdev_wait_for_examine" 00:06:05.605 } 00:06:05.605 ] 00:06:05.605 } 00:06:05.605 ] 00:06:05.605 } 00:06:05.605 [2024-11-06 15:04:34.721827] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:05.605 [2024-11-06 15:04:34.721932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57860 ] 00:06:05.605 [2024-11-06 15:04:34.851616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.864 [2024-11-06 15:04:34.908320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.864  [2024-11-06T15:04:35.397Z] Copying: 56/56 [kB] (average 27 MBps) 00:06:06.122 00:06:06.123 15:04:35 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:06.123 15:04:35 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:06.123 15:04:35 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:06.123 15:04:35 -- dd/common.sh@11 -- # local nvme_ref= 00:06:06.123 15:04:35 -- dd/common.sh@12 -- # local size=57344 00:06:06.123 15:04:35 -- dd/common.sh@14 -- # local bs=1048576 00:06:06.123 15:04:35 -- dd/common.sh@15 -- # local count=1 00:06:06.123 15:04:35 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:06.123 15:04:35 -- dd/common.sh@18 -- # gen_conf 00:06:06.123 15:04:35 -- dd/common.sh@31 -- # xtrace_disable 00:06:06.123 15:04:35 -- common/autotest_common.sh@10 -- # set +x 00:06:06.123 [2024-11-06 15:04:35.264400] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:06.123 [2024-11-06 15:04:35.264495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57868 ] 00:06:06.123 { 00:06:06.123 "subsystems": [ 00:06:06.123 { 00:06:06.123 "subsystem": "bdev", 00:06:06.123 "config": [ 00:06:06.123 { 00:06:06.123 "params": { 00:06:06.123 "trtype": "pcie", 00:06:06.123 "traddr": "0000:00:06.0", 00:06:06.123 "name": "Nvme0" 00:06:06.123 }, 00:06:06.123 "method": "bdev_nvme_attach_controller" 00:06:06.123 }, 00:06:06.123 { 00:06:06.123 "method": "bdev_wait_for_examine" 00:06:06.123 } 00:06:06.123 ] 00:06:06.123 } 00:06:06.123 ] 00:06:06.123 } 00:06:06.381 [2024-11-06 15:04:35.399631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.381 [2024-11-06 15:04:35.448498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.381  [2024-11-06T15:04:35.915Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:06.640 00:06:06.640 15:04:35 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:06.640 15:04:35 -- dd/basic_rw.sh@23 -- # count=7 00:06:06.640 15:04:35 -- dd/basic_rw.sh@24 -- # count=7 00:06:06.641 15:04:35 -- dd/basic_rw.sh@25 -- # size=57344 00:06:06.641 15:04:35 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:06.641 15:04:35 -- dd/common.sh@98 -- # xtrace_disable 00:06:06.641 15:04:35 -- common/autotest_common.sh@10 -- # set +x 00:06:07.208 15:04:36 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:07.208 15:04:36 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:07.208 15:04:36 -- dd/common.sh@31 -- # xtrace_disable 00:06:07.208 15:04:36 -- common/autotest_common.sh@10 -- # set +x 00:06:07.208 [2024-11-06 15:04:36.318293] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:07.208 { 00:06:07.208 "subsystems": [ 00:06:07.208 { 00:06:07.208 "subsystem": "bdev", 00:06:07.208 "config": [ 00:06:07.208 { 00:06:07.208 "params": { 00:06:07.208 "trtype": "pcie", 00:06:07.208 "traddr": "0000:00:06.0", 00:06:07.208 "name": "Nvme0" 00:06:07.208 }, 00:06:07.208 "method": "bdev_nvme_attach_controller" 00:06:07.208 }, 00:06:07.208 { 00:06:07.208 "method": "bdev_wait_for_examine" 00:06:07.208 } 00:06:07.208 ] 00:06:07.208 } 00:06:07.208 ] 00:06:07.208 } 00:06:07.208 [2024-11-06 15:04:36.318370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57886 ] 00:06:07.208 [2024-11-06 15:04:36.455569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.467 [2024-11-06 15:04:36.504786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.467  [2024-11-06T15:04:37.001Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:07.726 00:06:07.726 15:04:36 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:07.726 15:04:36 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:07.726 15:04:36 -- dd/common.sh@31 -- # xtrace_disable 00:06:07.726 15:04:36 -- common/autotest_common.sh@10 -- # set +x 00:06:07.726 { 00:06:07.726 "subsystems": [ 00:06:07.726 { 00:06:07.726 "subsystem": "bdev", 00:06:07.726 "config": [ 00:06:07.726 { 00:06:07.726 "params": { 00:06:07.726 "trtype": "pcie", 00:06:07.726 "traddr": "0000:00:06.0", 00:06:07.726 "name": "Nvme0" 00:06:07.726 }, 00:06:07.726 "method": "bdev_nvme_attach_controller" 00:06:07.726 }, 00:06:07.726 { 00:06:07.726 "method": "bdev_wait_for_examine" 00:06:07.726 } 00:06:07.726 ] 00:06:07.726 } 00:06:07.726 ] 00:06:07.726 } 00:06:07.726 [2024-11-06 15:04:36.856131] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:07.726 [2024-11-06 15:04:36.856221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57906 ] 00:06:07.726 [2024-11-06 15:04:36.992291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.984 [2024-11-06 15:04:37.044060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.984  [2024-11-06T15:04:37.517Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:08.242 00:06:08.242 15:04:37 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:08.242 15:04:37 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:08.242 15:04:37 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:08.242 15:04:37 -- dd/common.sh@11 -- # local nvme_ref= 00:06:08.243 15:04:37 -- dd/common.sh@12 -- # local size=57344 00:06:08.243 15:04:37 -- dd/common.sh@14 -- # local bs=1048576 00:06:08.243 15:04:37 -- dd/common.sh@15 -- # local count=1 00:06:08.243 15:04:37 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:08.243 15:04:37 -- dd/common.sh@18 -- # gen_conf 00:06:08.243 15:04:37 -- dd/common.sh@31 -- # xtrace_disable 00:06:08.243 15:04:37 -- common/autotest_common.sh@10 -- # set +x 00:06:08.243 { 00:06:08.243 "subsystems": [ 00:06:08.243 { 00:06:08.243 "subsystem": "bdev", 00:06:08.243 "config": [ 00:06:08.243 { 00:06:08.243 "params": { 00:06:08.243 "trtype": "pcie", 00:06:08.243 "traddr": "0000:00:06.0", 00:06:08.243 "name": "Nvme0" 00:06:08.243 }, 00:06:08.243 "method": "bdev_nvme_attach_controller" 00:06:08.243 }, 00:06:08.243 { 00:06:08.243 "method": "bdev_wait_for_examine" 00:06:08.243 } 00:06:08.243 ] 00:06:08.243 } 00:06:08.243 ] 00:06:08.243 } 00:06:08.243 [2024-11-06 15:04:37.425307] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:08.243 [2024-11-06 15:04:37.425457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57922 ] 00:06:08.501 [2024-11-06 15:04:37.564593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.501 [2024-11-06 15:04:37.617087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.501  [2024-11-06T15:04:38.035Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:08.760 00:06:08.760 15:04:37 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:08.760 15:04:37 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:08.760 15:04:37 -- dd/basic_rw.sh@23 -- # count=3 00:06:08.760 15:04:37 -- dd/basic_rw.sh@24 -- # count=3 00:06:08.760 15:04:37 -- dd/basic_rw.sh@25 -- # size=49152 00:06:08.760 15:04:37 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:08.760 15:04:37 -- dd/common.sh@98 -- # xtrace_disable 00:06:08.760 15:04:37 -- common/autotest_common.sh@10 -- # set +x 00:06:09.327 15:04:38 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:09.327 15:04:38 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:09.327 15:04:38 -- dd/common.sh@31 -- # xtrace_disable 00:06:09.327 15:04:38 -- common/autotest_common.sh@10 -- # set +x 00:06:09.327 { 00:06:09.327 "subsystems": [ 00:06:09.327 { 00:06:09.327 "subsystem": "bdev", 00:06:09.327 "config": [ 00:06:09.327 { 00:06:09.327 "params": { 00:06:09.327 "trtype": "pcie", 00:06:09.327 "traddr": "0000:00:06.0", 00:06:09.327 "name": "Nvme0" 00:06:09.327 }, 00:06:09.327 "method": "bdev_nvme_attach_controller" 00:06:09.327 }, 00:06:09.327 { 00:06:09.327 "method": "bdev_wait_for_examine" 00:06:09.327 } 00:06:09.327 ] 00:06:09.327 } 00:06:09.327 ] 00:06:09.327 } 00:06:09.328 [2024-11-06 15:04:38.493794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:09.328 [2024-11-06 15:04:38.493916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57939 ] 00:06:09.586 [2024-11-06 15:04:38.640016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.586 [2024-11-06 15:04:38.697528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.586  [2024-11-06T15:04:39.121Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:09.846 00:06:09.846 15:04:38 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:09.846 15:04:38 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:09.846 15:04:38 -- dd/common.sh@31 -- # xtrace_disable 00:06:09.846 15:04:38 -- common/autotest_common.sh@10 -- # set +x 00:06:09.846 [2024-11-06 15:04:39.040368] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:09.846 [2024-11-06 15:04:39.040487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57951 ] 00:06:09.846 { 00:06:09.846 "subsystems": [ 00:06:09.846 { 00:06:09.846 "subsystem": "bdev", 00:06:09.846 "config": [ 00:06:09.846 { 00:06:09.846 "params": { 00:06:09.846 "trtype": "pcie", 00:06:09.846 "traddr": "0000:00:06.0", 00:06:09.846 "name": "Nvme0" 00:06:09.846 }, 00:06:09.846 "method": "bdev_nvme_attach_controller" 00:06:09.846 }, 00:06:09.846 { 00:06:09.846 "method": "bdev_wait_for_examine" 00:06:09.846 } 00:06:09.846 ] 00:06:09.846 } 00:06:09.846 ] 00:06:09.846 } 00:06:10.105 [2024-11-06 15:04:39.176682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.105 [2024-11-06 15:04:39.230929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.105  [2024-11-06T15:04:39.638Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:10.363 00:06:10.364 15:04:39 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.364 15:04:39 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:10.364 15:04:39 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:10.364 15:04:39 -- dd/common.sh@11 -- # local nvme_ref= 00:06:10.364 15:04:39 -- dd/common.sh@12 -- # local size=49152 00:06:10.364 15:04:39 -- dd/common.sh@14 -- # local bs=1048576 00:06:10.364 15:04:39 -- dd/common.sh@15 -- # local count=1 00:06:10.364 15:04:39 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:10.364 15:04:39 -- dd/common.sh@18 -- # gen_conf 00:06:10.364 15:04:39 -- dd/common.sh@31 -- # xtrace_disable 00:06:10.364 15:04:39 -- common/autotest_common.sh@10 -- # set +x 00:06:10.364 [2024-11-06 15:04:39.581912] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:10.364 [2024-11-06 15:04:39.582001] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57969 ] 00:06:10.364 { 00:06:10.364 "subsystems": [ 00:06:10.364 { 00:06:10.364 "subsystem": "bdev", 00:06:10.364 "config": [ 00:06:10.364 { 00:06:10.364 "params": { 00:06:10.364 "trtype": "pcie", 00:06:10.364 "traddr": "0000:00:06.0", 00:06:10.364 "name": "Nvme0" 00:06:10.364 }, 00:06:10.364 "method": "bdev_nvme_attach_controller" 00:06:10.364 }, 00:06:10.364 { 00:06:10.364 "method": "bdev_wait_for_examine" 00:06:10.364 } 00:06:10.364 ] 00:06:10.364 } 00:06:10.364 ] 00:06:10.364 } 00:06:10.622 [2024-11-06 15:04:39.717045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.622 [2024-11-06 15:04:39.771313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.622  [2024-11-06T15:04:40.155Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:10.880 00:06:10.880 15:04:40 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:10.880 15:04:40 -- dd/basic_rw.sh@23 -- # count=3 00:06:10.880 15:04:40 -- dd/basic_rw.sh@24 -- # count=3 00:06:10.880 15:04:40 -- dd/basic_rw.sh@25 -- # size=49152 00:06:10.880 15:04:40 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:10.880 15:04:40 -- dd/common.sh@98 -- # xtrace_disable 00:06:10.880 15:04:40 -- common/autotest_common.sh@10 -- # set +x 00:06:11.448 15:04:40 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:11.448 15:04:40 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:11.448 15:04:40 -- dd/common.sh@31 -- # xtrace_disable 00:06:11.448 15:04:40 -- common/autotest_common.sh@10 -- # set +x 00:06:11.448 [2024-11-06 15:04:40.610937] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
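Each dd_rw pass traced above repeats the same cycle: the bdev JSON printed in the trace (an NVMe controller at PCI 0000:00:06.0 exposed as Nvme0n1) is handed to spdk_dd, dd.dump0 is written to the bdev at the current block size and queue depth, the same region is read back into dd.dump1, diff -q confirms the round trip, and clear_nvme rezeroes the first megabyte before the next pass. A minimal sketch of one pass, using only flags that appear in the trace; the $CONF file and the bs/qd/count values are illustrative stand-ins for the /dev/fd/62 process substitution and the loop variables used by basic_rw.sh:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
CONF=bdev.json          # assumed file holding the bdev JSON shown above (Nvme0 at 0000:00:06.0)
bs=8192; qd=1; count=7  # one bs/qd combination exercised above; size = bs * count = 57344 bytes
"$DD" --if="$DUMP0" --ob=Nvme0n1 --bs=$bs --qd=$qd --json "$CONF"                  # write dump0 to the bdev
"$DD" --ib=Nvme0n1 --of="$DUMP1" --bs=$bs --qd=$qd --count=$count --json "$CONF"   # read the same region back
diff -q "$DUMP0" "$DUMP1"                                                          # verify the round trip
"$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$CONF"            # clear_nvme: rezero the first 1 MiB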
00:06:11.448 [2024-11-06 15:04:40.611027] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57983 ] 00:06:11.448 { 00:06:11.448 "subsystems": [ 00:06:11.448 { 00:06:11.448 "subsystem": "bdev", 00:06:11.448 "config": [ 00:06:11.448 { 00:06:11.448 "params": { 00:06:11.448 "trtype": "pcie", 00:06:11.448 "traddr": "0000:00:06.0", 00:06:11.448 "name": "Nvme0" 00:06:11.448 }, 00:06:11.448 "method": "bdev_nvme_attach_controller" 00:06:11.448 }, 00:06:11.448 { 00:06:11.448 "method": "bdev_wait_for_examine" 00:06:11.448 } 00:06:11.448 ] 00:06:11.448 } 00:06:11.448 ] 00:06:11.448 } 00:06:11.707 [2024-11-06 15:04:40.747493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.707 [2024-11-06 15:04:40.803910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.707  [2024-11-06T15:04:41.241Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:11.966 00:06:11.966 15:04:41 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:11.966 15:04:41 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:11.966 15:04:41 -- dd/common.sh@31 -- # xtrace_disable 00:06:11.966 15:04:41 -- common/autotest_common.sh@10 -- # set +x 00:06:11.966 [2024-11-06 15:04:41.152028] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:11.966 [2024-11-06 15:04:41.152557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57995 ] 00:06:11.966 { 00:06:11.966 "subsystems": [ 00:06:11.966 { 00:06:11.966 "subsystem": "bdev", 00:06:11.966 "config": [ 00:06:11.966 { 00:06:11.966 "params": { 00:06:11.966 "trtype": "pcie", 00:06:11.966 "traddr": "0000:00:06.0", 00:06:11.966 "name": "Nvme0" 00:06:11.966 }, 00:06:11.966 "method": "bdev_nvme_attach_controller" 00:06:11.966 }, 00:06:11.966 { 00:06:11.966 "method": "bdev_wait_for_examine" 00:06:11.966 } 00:06:11.966 ] 00:06:11.966 } 00:06:11.966 ] 00:06:11.966 } 00:06:12.224 [2024-11-06 15:04:41.289680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.224 [2024-11-06 15:04:41.344152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.224  [2024-11-06T15:04:41.758Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:12.483 00:06:12.483 15:04:41 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:12.483 15:04:41 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:12.483 15:04:41 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:12.483 15:04:41 -- dd/common.sh@11 -- # local nvme_ref= 00:06:12.483 15:04:41 -- dd/common.sh@12 -- # local size=49152 00:06:12.483 15:04:41 -- dd/common.sh@14 -- # local bs=1048576 00:06:12.483 15:04:41 -- dd/common.sh@15 -- # local count=1 00:06:12.483 15:04:41 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:12.483 15:04:41 -- dd/common.sh@18 -- # gen_conf 00:06:12.483 15:04:41 -- dd/common.sh@31 -- # xtrace_disable 00:06:12.483 15:04:41 -- common/autotest_common.sh@10 -- # set +x 00:06:12.483 [2024-11-06 
15:04:41.704595] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.483 [2024-11-06 15:04:41.704739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58013 ] 00:06:12.483 { 00:06:12.483 "subsystems": [ 00:06:12.483 { 00:06:12.483 "subsystem": "bdev", 00:06:12.483 "config": [ 00:06:12.483 { 00:06:12.483 "params": { 00:06:12.483 "trtype": "pcie", 00:06:12.483 "traddr": "0000:00:06.0", 00:06:12.483 "name": "Nvme0" 00:06:12.483 }, 00:06:12.483 "method": "bdev_nvme_attach_controller" 00:06:12.483 }, 00:06:12.483 { 00:06:12.483 "method": "bdev_wait_for_examine" 00:06:12.483 } 00:06:12.483 ] 00:06:12.483 } 00:06:12.483 ] 00:06:12.483 } 00:06:12.741 [2024-11-06 15:04:41.844059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.741 [2024-11-06 15:04:41.894340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.741  [2024-11-06T15:04:42.275Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:13.000 00:06:13.000 00:06:13.000 real 0m12.841s 00:06:13.000 user 0m9.508s 00:06:13.000 sys 0m2.207s 00:06:13.000 15:04:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.000 15:04:42 -- common/autotest_common.sh@10 -- # set +x 00:06:13.000 ************************************ 00:06:13.000 END TEST dd_rw 00:06:13.000 ************************************ 00:06:13.000 15:04:42 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:13.000 15:04:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:13.000 15:04:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.000 15:04:42 -- common/autotest_common.sh@10 -- # set +x 00:06:13.000 ************************************ 00:06:13.000 START TEST dd_rw_offset 00:06:13.000 ************************************ 00:06:13.000 15:04:42 -- common/autotest_common.sh@1114 -- # basic_offset 00:06:13.000 15:04:42 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:13.000 15:04:42 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:13.000 15:04:42 -- dd/common.sh@98 -- # xtrace_disable 00:06:13.000 15:04:42 -- common/autotest_common.sh@10 -- # set +x 00:06:13.259 15:04:42 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:13.259 15:04:42 -- dd/basic_rw.sh@56 -- # 
data=2jpm5tinx4f79cqslh26y9pcao558810dhxl8w27hlehjfua3d25qk9qv82oi5ql17czl6xx1u419ssogjbze0h0sg7u2dqmaa34xzvwsjmlsbjqgur9kpq8eeg8voyccapdyqfpdtqqglnyg5v4nbdg8yia2sn4q911s2ineyizeisd8054dk4579rv1dddemgcyxv85pswvyvoxr1bvlcwwirkbmt5jlw8l0fsq7ywduqpda234yi8o13am7drpnnekjwaokegsspsfewr4totawfaa190uvjmkoz2ltujduk2wjcy3a4zyomdtsggpgn4sebg2f4t9l7ccpcg4sux6fsrjob423le488qixte6wcgbk5tkkpj6848nk5pnxfd0rikhp4yhj788matbaxo5x2nagpvkzb3hsi96yx7akob0yefnmxj8zfr0cd28xcpf2poknw0wcks2b859fitjfixpfqapk7z5k0siu3h4rlztq0eymm47ihvel4w4eekxzr43792ill61y06vudmgip4r8kdlqg4shgz9urv1aniz4fe44j0iu6kslq80xoqp9cp5ii7hkcsfkhpubnckxpbmsdvzb8va28hh0mdvoj6ev46uutwau7ulwggy1xh96o3o6h9l8x81bttcj5y8qiwx1cp6yh1cqt6uv3dyqrg74yjiynv2tn3hqq59vzuerb785n426cryjnqsr70t3b4rvzwfobdwzmzmdkrg2grve7va57sdwf4ey651z7watgel9m4giakejvjq5n3j1x0tkwwj8p2nufqbh97916nmw62myhlyai97rqubwga2aa563bjfmcbada7frrxxnsommrn9ein3o1y60207ntfs6mv4fdb7rydrzd6gxiu5vsijy0xrbagtalv6m6b4jptilp8thx0adsla1wqq3tw9nhhakf8ast3ownv9qjezb4l8z87lszfifc35xeedfaurgqpjm46q2ulcqravh9ainua6yluwyiru5snd0gduradynx41h3jibmrgy0vmk3oelpnzr7ssj51ga7vskp42cd0l1vgbqriyhztouw496geflh39vd997sg81wu35ao7e4v3idwlnwp27jgyyrofs9ib5w8vvkvt8z4rf2vboii8ea4jursjbqc3g0u12i1e1s6dxbexp06osz2mmf4anl048d9ehtjftplp9kabjq4wq9b1kkisfuxi19p9eqwvyulsa87bu4xguuz6p013zigw59hx4hl2llz8bhizz60ab5p36r7nkp79jf42eylob5hlzgf59rhv4zy8ctpm7bmyf0m84w9jbmtz2v054r5kqjxzw2a40s8gj3yjqkrf53ghu7ju8wteyp7174hqms2vyjyvbernoax4li670wv3b33d9nyq0ntq220hngo5ukww9gpghac4elv31hx3i1mlby4s17y1pglido79l1mazoudaoqqjoawl9bxr4es0nnf0l1h6f9iobnq6ztlo306wqv01fzqii5k5in12ijhcjpoz466nu4d0x3g9t5bdxyu57qu1a99l6uo0w77pdla182gfo2pf31un78wncwn6v79nocaytne5hist391j48duyzlkcncvsrulshsfndcgomnu4gne5oatr0wviqfrj7v7hgazhtlz85iuxq0bxeq8xyvar68eht1h149k0b26w7wc6weg53cy6x07hzts7bu9qow8ljorux5mrrv9riessum41k6vp0ae5p1p6yvy9l75pqc9x76ykvl9lxlih86t98jxtvxe8beutd43v2gqz8fsptjlcx1u4lpq4q2elgds1huyoxg0sz6gktr7v41hslzo1aqg3f4idk77q0d08a4exbcnlihlp32wtm975asucl2yc80iqtn89rhb4wxtd4a4ril5sastabmgwgt911pt9051zb581tpn6lvn87jhw21bzs7ts0evbgvhprme7kpbc91j5k1m8wxtm09k2proe4jx5lgszl2el9w0ag15e7rzlpxwvd7kqqrdrz1lfjr7qe2a3gpwsvczno60z1kmoh0f6bylklejcjzdnjqp7wzu4pm6xmeef9xqv3incgddnri4qjgtah02pzbttyxm4uhgsuwiewpin03gepamev8w8xx05ivnp0rrd8rqr907s2vl5nef11e2wk1k0zc9q701k3qtue950k0s29hu2ywoe8quvbptgplja31llihb9q477x2yketimq0e5z9nklgivsltu7lvde2i4tgu0z086wkw1utsg9h895lsscswr0spbtc1tjp2dveih91nq1g5wxjbzvq8r41dei1o38uek8eow7vt5wgvdfi7bj0wlc5r516dkbdvrm8dijeux5isgbzdht9xeirn6vzgu951g1np0z6qqcw79vgdasph1r58iogi0ql226jqkpw8nlrt252fuolgp098iabl0rdu69jevxzor7l21nmkz5v5or54cvgjijgxqwjqg13a499bv7mey6jufig74q2y9xbhkc508z2jz2d3wn8o503djzh5lp2m5nn8bmn6i2hnxzgqqj0up6k81rbn0z6ykhkg7sahj6p3ku52yfb173yfswqdn9amndlg865kob99pegpx4kr5t9rft1bnh5gnazx0c4uwmnem94n0zfkguman448hep1wlw49t6g9sl8rq0fdkh3a7jj65p4xswo1pxvv5lcykwhlboi0rpgpil1bnw8nvsvicir11utozx96tjh3pn352v7uyzrfs7suqdbtji49vfe7ozqw7z8jw7zrpziyyc7azlzm0d4qamutoz9klc64j7bdecde5a0on2b57krzwua8ny31cibjv10m9tkzwpf20gnbmvybv3c3khkoq0ur2prqrtsjzyq94nt0f2sbtowf9kxogtw5eggdyafzv34nyusvhbvh81l3hr5oel9bq2vgctqute1lute76a3arkomg3cmx1o7jqrskfkvbqjh6g2j0yhqj0pvy593rf9vf0xr5130zo6mlys3h4mh18y25bpkrcu9j62omfhi7n1v9tabfz5bbgnio3ynsyk2cpfhtbte79dktnqn75us9ogh4ect5xextlbp6q8ydjhf0xd6dn58ma1tx2nt3zn32uvb7ayrul967tnjozm5q2vz3zs1dufgnmz2g1ugq0ggxtaln9zzu56lez71f8jda4ql0nqa7ybwhoaxtabeocb5hp17aa44s1xhufrat13iojbldr7820hbxqmt301n8ff6song222g6cfh8qhx0t4shdn5aa8vwabs4hh2fgsxxixrit34b95qzeg2md3782d6ahswbe1eggo5l8wbhzmxe6ur8vw32s50b6rdtfq064t3b0e1vrs9gsasj7reczlz50z9krv84y3pvrvkd9yao5xl4ib4y30ll087o70mn0uj72zaeh6g4s4ck9vrs673csd0a6sl2hdladbagiv6krzse7wkbic9wznjpeiisk3
qscgx58ss7qbz2xs82hn5cmjrisq6adr2urssswopyw0vzxlflh5lefthupxlu2gmgpj27fvfcwgzcevkqudo764jyk2lr44251og1mjsgooffqglfdpx6z8c11jeokbsek4vfy120yigoehlrsskp6knenc3jv6z1ivwzmrlts25n0agx559na142en5yekkz4vo9uddhhrkcv9sgjgv34g1xptbl6ptnuwqq91lm9yi5whtvniwo45om2ck7t0ro5wes0u4xnnp217yzshmezte9uosytaoa2h21xd0p9nxzygmmp48ec51di9qg2fd3mq1q8m6pbadawier0dmwn48aw4ls9hkwfxhri0k5etqcscbqgpoagxsna9zgc7xth40x8hxnpjkuazhblgs7f6c42ig94cutms30mt5n97ckid6mb1nt4miwdai7g13g7wsxtdp2vou32shlc9n2h94m9qfnouvnlkw4grkxb5tl5g24snb1kgl9pylbd4hyezci6hzfxiunk96u76gxgxlb6q40fe30 00:06:13.259 15:04:42 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:13.259 15:04:42 -- dd/basic_rw.sh@59 -- # gen_conf 00:06:13.259 15:04:42 -- dd/common.sh@31 -- # xtrace_disable 00:06:13.259 15:04:42 -- common/autotest_common.sh@10 -- # set +x 00:06:13.259 [2024-11-06 15:04:42.355176] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:13.259 [2024-11-06 15:04:42.355296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58038 ] 00:06:13.259 { 00:06:13.259 "subsystems": [ 00:06:13.259 { 00:06:13.259 "subsystem": "bdev", 00:06:13.259 "config": [ 00:06:13.259 { 00:06:13.259 "params": { 00:06:13.259 "trtype": "pcie", 00:06:13.259 "traddr": "0000:00:06.0", 00:06:13.259 "name": "Nvme0" 00:06:13.259 }, 00:06:13.259 "method": "bdev_nvme_attach_controller" 00:06:13.259 }, 00:06:13.259 { 00:06:13.259 "method": "bdev_wait_for_examine" 00:06:13.259 } 00:06:13.260 ] 00:06:13.260 } 00:06:13.260 ] 00:06:13.260 } 00:06:13.260 [2024-11-06 15:04:42.489315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.518 [2024-11-06 15:04:42.540036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.518  [2024-11-06T15:04:43.052Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:13.777 00:06:13.777 15:04:42 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:13.777 15:04:42 -- dd/basic_rw.sh@65 -- # gen_conf 00:06:13.777 15:04:42 -- dd/common.sh@31 -- # xtrace_disable 00:06:13.777 15:04:42 -- common/autotest_common.sh@10 -- # set +x 00:06:13.777 { 00:06:13.777 "subsystems": [ 00:06:13.777 { 00:06:13.777 "subsystem": "bdev", 00:06:13.777 "config": [ 00:06:13.777 { 00:06:13.777 "params": { 00:06:13.777 "trtype": "pcie", 00:06:13.777 "traddr": "0000:00:06.0", 00:06:13.777 "name": "Nvme0" 00:06:13.777 }, 00:06:13.777 "method": "bdev_nvme_attach_controller" 00:06:13.777 }, 00:06:13.777 { 00:06:13.777 "method": "bdev_wait_for_examine" 00:06:13.777 } 00:06:13.777 ] 00:06:13.777 } 00:06:13.777 ] 00:06:13.777 } 00:06:13.777 [2024-11-06 15:04:42.911013] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
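The dd_rw_offset test running above checks that --seek on the write side and --skip on the read side address the same block: the 4096-byte random payload generated into dd.dump0 is written one block into Nvme0n1, read back from the same offset into dd.dump1, and compared byte for byte. Condensed, with the same illustrative $DD/$DUMP0/$DUMP1/$CONF stand-ins as in the earlier sketch:

read -rn4096 data < "$DUMP0"                                            # the 4096 random bytes generated above
"$DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json "$CONF"                # write at block offset 1
"$DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json "$CONF"      # read the same single block back
read -rn4096 data_check < "$DUMP1"                                      # first 4096 bytes of what came back
[[ "$data" == "$data_check" ]]                                          # must match what was written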
00:06:13.777 [2024-11-06 15:04:42.911570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58056 ] 00:06:13.777 [2024-11-06 15:04:43.045560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.035 [2024-11-06 15:04:43.099876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.035  [2024-11-06T15:04:43.569Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:14.294 00:06:14.294 15:04:43 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:14.294 ************************************ 00:06:14.294 END TEST dd_rw_offset 00:06:14.294 ************************************ 00:06:14.295 15:04:43 -- dd/basic_rw.sh@72 -- # [[ 2jpm5tinx4f79cqslh26y9pcao558810dhxl8w27hlehjfua3d25qk9qv82oi5ql17czl6xx1u419ssogjbze0h0sg7u2dqmaa34xzvwsjmlsbjqgur9kpq8eeg8voyccapdyqfpdtqqglnyg5v4nbdg8yia2sn4q911s2ineyizeisd8054dk4579rv1dddemgcyxv85pswvyvoxr1bvlcwwirkbmt5jlw8l0fsq7ywduqpda234yi8o13am7drpnnekjwaokegsspsfewr4totawfaa190uvjmkoz2ltujduk2wjcy3a4zyomdtsggpgn4sebg2f4t9l7ccpcg4sux6fsrjob423le488qixte6wcgbk5tkkpj6848nk5pnxfd0rikhp4yhj788matbaxo5x2nagpvkzb3hsi96yx7akob0yefnmxj8zfr0cd28xcpf2poknw0wcks2b859fitjfixpfqapk7z5k0siu3h4rlztq0eymm47ihvel4w4eekxzr43792ill61y06vudmgip4r8kdlqg4shgz9urv1aniz4fe44j0iu6kslq80xoqp9cp5ii7hkcsfkhpubnckxpbmsdvzb8va28hh0mdvoj6ev46uutwau7ulwggy1xh96o3o6h9l8x81bttcj5y8qiwx1cp6yh1cqt6uv3dyqrg74yjiynv2tn3hqq59vzuerb785n426cryjnqsr70t3b4rvzwfobdwzmzmdkrg2grve7va57sdwf4ey651z7watgel9m4giakejvjq5n3j1x0tkwwj8p2nufqbh97916nmw62myhlyai97rqubwga2aa563bjfmcbada7frrxxnsommrn9ein3o1y60207ntfs6mv4fdb7rydrzd6gxiu5vsijy0xrbagtalv6m6b4jptilp8thx0adsla1wqq3tw9nhhakf8ast3ownv9qjezb4l8z87lszfifc35xeedfaurgqpjm46q2ulcqravh9ainua6yluwyiru5snd0gduradynx41h3jibmrgy0vmk3oelpnzr7ssj51ga7vskp42cd0l1vgbqriyhztouw496geflh39vd997sg81wu35ao7e4v3idwlnwp27jgyyrofs9ib5w8vvkvt8z4rf2vboii8ea4jursjbqc3g0u12i1e1s6dxbexp06osz2mmf4anl048d9ehtjftplp9kabjq4wq9b1kkisfuxi19p9eqwvyulsa87bu4xguuz6p013zigw59hx4hl2llz8bhizz60ab5p36r7nkp79jf42eylob5hlzgf59rhv4zy8ctpm7bmyf0m84w9jbmtz2v054r5kqjxzw2a40s8gj3yjqkrf53ghu7ju8wteyp7174hqms2vyjyvbernoax4li670wv3b33d9nyq0ntq220hngo5ukww9gpghac4elv31hx3i1mlby4s17y1pglido79l1mazoudaoqqjoawl9bxr4es0nnf0l1h6f9iobnq6ztlo306wqv01fzqii5k5in12ijhcjpoz466nu4d0x3g9t5bdxyu57qu1a99l6uo0w77pdla182gfo2pf31un78wncwn6v79nocaytne5hist391j48duyzlkcncvsrulshsfndcgomnu4gne5oatr0wviqfrj7v7hgazhtlz85iuxq0bxeq8xyvar68eht1h149k0b26w7wc6weg53cy6x07hzts7bu9qow8ljorux5mrrv9riessum41k6vp0ae5p1p6yvy9l75pqc9x76ykvl9lxlih86t98jxtvxe8beutd43v2gqz8fsptjlcx1u4lpq4q2elgds1huyoxg0sz6gktr7v41hslzo1aqg3f4idk77q0d08a4exbcnlihlp32wtm975asucl2yc80iqtn89rhb4wxtd4a4ril5sastabmgwgt911pt9051zb581tpn6lvn87jhw21bzs7ts0evbgvhprme7kpbc91j5k1m8wxtm09k2proe4jx5lgszl2el9w0ag15e7rzlpxwvd7kqqrdrz1lfjr7qe2a3gpwsvczno60z1kmoh0f6bylklejcjzdnjqp7wzu4pm6xmeef9xqv3incgddnri4qjgtah02pzbttyxm4uhgsuwiewpin03gepamev8w8xx05ivnp0rrd8rqr907s2vl5nef11e2wk1k0zc9q701k3qtue950k0s29hu2ywoe8quvbptgplja31llihb9q477x2yketimq0e5z9nklgivsltu7lvde2i4tgu0z086wkw1utsg9h895lsscswr0spbtc1tjp2dveih91nq1g5wxjbzvq8r41dei1o38uek8eow7vt5wgvdfi7bj0wlc5r516dkbdvrm8dijeux5isgbzdht9xeirn6vzgu951g1np0z6qqcw79vgdasph1r58iogi0ql226jqkpw8nlrt252fuolgp098iabl0rdu69jevxzor7l21nmkz5v5or54cvgjijgxqwjqg13a499bv7mey6jufig74q2y9xbhkc508z2jz2d3wn8o503djzh5lp2m5nn8bmn6i2hnxzgqqj0up6k81rbn0z6ykhkg7sahj6p3ku52yfb173yfswqdn9amndlg865kob99pegpx4kr5t9rft1bnh5gnazx0c4uwmnem94n0zfkg
uman448hep1wlw49t6g9sl8rq0fdkh3a7jj65p4xswo1pxvv5lcykwhlboi0rpgpil1bnw8nvsvicir11utozx96tjh3pn352v7uyzrfs7suqdbtji49vfe7ozqw7z8jw7zrpziyyc7azlzm0d4qamutoz9klc64j7bdecde5a0on2b57krzwua8ny31cibjv10m9tkzwpf20gnbmvybv3c3khkoq0ur2prqrtsjzyq94nt0f2sbtowf9kxogtw5eggdyafzv34nyusvhbvh81l3hr5oel9bq2vgctqute1lute76a3arkomg3cmx1o7jqrskfkvbqjh6g2j0yhqj0pvy593rf9vf0xr5130zo6mlys3h4mh18y25bpkrcu9j62omfhi7n1v9tabfz5bbgnio3ynsyk2cpfhtbte79dktnqn75us9ogh4ect5xextlbp6q8ydjhf0xd6dn58ma1tx2nt3zn32uvb7ayrul967tnjozm5q2vz3zs1dufgnmz2g1ugq0ggxtaln9zzu56lez71f8jda4ql0nqa7ybwhoaxtabeocb5hp17aa44s1xhufrat13iojbldr7820hbxqmt301n8ff6song222g6cfh8qhx0t4shdn5aa8vwabs4hh2fgsxxixrit34b95qzeg2md3782d6ahswbe1eggo5l8wbhzmxe6ur8vw32s50b6rdtfq064t3b0e1vrs9gsasj7reczlz50z9krv84y3pvrvkd9yao5xl4ib4y30ll087o70mn0uj72zaeh6g4s4ck9vrs673csd0a6sl2hdladbagiv6krzse7wkbic9wznjpeiisk3qscgx58ss7qbz2xs82hn5cmjrisq6adr2urssswopyw0vzxlflh5lefthupxlu2gmgpj27fvfcwgzcevkqudo764jyk2lr44251og1mjsgooffqglfdpx6z8c11jeokbsek4vfy120yigoehlrsskp6knenc3jv6z1ivwzmrlts25n0agx559na142en5yekkz4vo9uddhhrkcv9sgjgv34g1xptbl6ptnuwqq91lm9yi5whtvniwo45om2ck7t0ro5wes0u4xnnp217yzshmezte9uosytaoa2h21xd0p9nxzygmmp48ec51di9qg2fd3mq1q8m6pbadawier0dmwn48aw4ls9hkwfxhri0k5etqcscbqgpoagxsna9zgc7xth40x8hxnpjkuazhblgs7f6c42ig94cutms30mt5n97ckid6mb1nt4miwdai7g13g7wsxtdp2vou32shlc9n2h94m9qfnouvnlkw4grkxb5tl5g24snb1kgl9pylbd4hyezci6hzfxiunk96u76gxgxlb6q40fe30 == \2\j\p\m\5\t\i\n\x\4\f\7\9\c\q\s\l\h\2\6\y\9\p\c\a\o\5\5\8\8\1\0\d\h\x\l\8\w\2\7\h\l\e\h\j\f\u\a\3\d\2\5\q\k\9\q\v\8\2\o\i\5\q\l\1\7\c\z\l\6\x\x\1\u\4\1\9\s\s\o\g\j\b\z\e\0\h\0\s\g\7\u\2\d\q\m\a\a\3\4\x\z\v\w\s\j\m\l\s\b\j\q\g\u\r\9\k\p\q\8\e\e\g\8\v\o\y\c\c\a\p\d\y\q\f\p\d\t\q\q\g\l\n\y\g\5\v\4\n\b\d\g\8\y\i\a\2\s\n\4\q\9\1\1\s\2\i\n\e\y\i\z\e\i\s\d\8\0\5\4\d\k\4\5\7\9\r\v\1\d\d\d\e\m\g\c\y\x\v\8\5\p\s\w\v\y\v\o\x\r\1\b\v\l\c\w\w\i\r\k\b\m\t\5\j\l\w\8\l\0\f\s\q\7\y\w\d\u\q\p\d\a\2\3\4\y\i\8\o\1\3\a\m\7\d\r\p\n\n\e\k\j\w\a\o\k\e\g\s\s\p\s\f\e\w\r\4\t\o\t\a\w\f\a\a\1\9\0\u\v\j\m\k\o\z\2\l\t\u\j\d\u\k\2\w\j\c\y\3\a\4\z\y\o\m\d\t\s\g\g\p\g\n\4\s\e\b\g\2\f\4\t\9\l\7\c\c\p\c\g\4\s\u\x\6\f\s\r\j\o\b\4\2\3\l\e\4\8\8\q\i\x\t\e\6\w\c\g\b\k\5\t\k\k\p\j\6\8\4\8\n\k\5\p\n\x\f\d\0\r\i\k\h\p\4\y\h\j\7\8\8\m\a\t\b\a\x\o\5\x\2\n\a\g\p\v\k\z\b\3\h\s\i\9\6\y\x\7\a\k\o\b\0\y\e\f\n\m\x\j\8\z\f\r\0\c\d\2\8\x\c\p\f\2\p\o\k\n\w\0\w\c\k\s\2\b\8\5\9\f\i\t\j\f\i\x\p\f\q\a\p\k\7\z\5\k\0\s\i\u\3\h\4\r\l\z\t\q\0\e\y\m\m\4\7\i\h\v\e\l\4\w\4\e\e\k\x\z\r\4\3\7\9\2\i\l\l\6\1\y\0\6\v\u\d\m\g\i\p\4\r\8\k\d\l\q\g\4\s\h\g\z\9\u\r\v\1\a\n\i\z\4\f\e\4\4\j\0\i\u\6\k\s\l\q\8\0\x\o\q\p\9\c\p\5\i\i\7\h\k\c\s\f\k\h\p\u\b\n\c\k\x\p\b\m\s\d\v\z\b\8\v\a\2\8\h\h\0\m\d\v\o\j\6\e\v\4\6\u\u\t\w\a\u\7\u\l\w\g\g\y\1\x\h\9\6\o\3\o\6\h\9\l\8\x\8\1\b\t\t\c\j\5\y\8\q\i\w\x\1\c\p\6\y\h\1\c\q\t\6\u\v\3\d\y\q\r\g\7\4\y\j\i\y\n\v\2\t\n\3\h\q\q\5\9\v\z\u\e\r\b\7\8\5\n\4\2\6\c\r\y\j\n\q\s\r\7\0\t\3\b\4\r\v\z\w\f\o\b\d\w\z\m\z\m\d\k\r\g\2\g\r\v\e\7\v\a\5\7\s\d\w\f\4\e\y\6\5\1\z\7\w\a\t\g\e\l\9\m\4\g\i\a\k\e\j\v\j\q\5\n\3\j\1\x\0\t\k\w\w\j\8\p\2\n\u\f\q\b\h\9\7\9\1\6\n\m\w\6\2\m\y\h\l\y\a\i\9\7\r\q\u\b\w\g\a\2\a\a\5\6\3\b\j\f\m\c\b\a\d\a\7\f\r\r\x\x\n\s\o\m\m\r\n\9\e\i\n\3\o\1\y\6\0\2\0\7\n\t\f\s\6\m\v\4\f\d\b\7\r\y\d\r\z\d\6\g\x\i\u\5\v\s\i\j\y\0\x\r\b\a\g\t\a\l\v\6\m\6\b\4\j\p\t\i\l\p\8\t\h\x\0\a\d\s\l\a\1\w\q\q\3\t\w\9\n\h\h\a\k\f\8\a\s\t\3\o\w\n\v\9\q\j\e\z\b\4\l\8\z\8\7\l\s\z\f\i\f\c\3\5\x\e\e\d\f\a\u\r\g\q\p\j\m\4\6\q\2\u\l\c\q\r\a\v\h\9\a\i\n\u\a\6\y\l\u\w\y\i\r\u\5\s\n\d\0\g\d\u\r\a\d\y\n\x\4\1\h\3\j\i\b\m\r\g\y\0\v\m\k\3\o\e\l\p\n\z\r\7\s\s\j\5\1\g\a\7\v\s\k\p\4\2\c\d\0\l\1\v
\g\b\q\r\i\y\h\z\t\o\u\w\4\9\6\g\e\f\l\h\3\9\v\d\9\9\7\s\g\8\1\w\u\3\5\a\o\7\e\4\v\3\i\d\w\l\n\w\p\2\7\j\g\y\y\r\o\f\s\9\i\b\5\w\8\v\v\k\v\t\8\z\4\r\f\2\v\b\o\i\i\8\e\a\4\j\u\r\s\j\b\q\c\3\g\0\u\1\2\i\1\e\1\s\6\d\x\b\e\x\p\0\6\o\s\z\2\m\m\f\4\a\n\l\0\4\8\d\9\e\h\t\j\f\t\p\l\p\9\k\a\b\j\q\4\w\q\9\b\1\k\k\i\s\f\u\x\i\1\9\p\9\e\q\w\v\y\u\l\s\a\8\7\b\u\4\x\g\u\u\z\6\p\0\1\3\z\i\g\w\5\9\h\x\4\h\l\2\l\l\z\8\b\h\i\z\z\6\0\a\b\5\p\3\6\r\7\n\k\p\7\9\j\f\4\2\e\y\l\o\b\5\h\l\z\g\f\5\9\r\h\v\4\z\y\8\c\t\p\m\7\b\m\y\f\0\m\8\4\w\9\j\b\m\t\z\2\v\0\5\4\r\5\k\q\j\x\z\w\2\a\4\0\s\8\g\j\3\y\j\q\k\r\f\5\3\g\h\u\7\j\u\8\w\t\e\y\p\7\1\7\4\h\q\m\s\2\v\y\j\y\v\b\e\r\n\o\a\x\4\l\i\6\7\0\w\v\3\b\3\3\d\9\n\y\q\0\n\t\q\2\2\0\h\n\g\o\5\u\k\w\w\9\g\p\g\h\a\c\4\e\l\v\3\1\h\x\3\i\1\m\l\b\y\4\s\1\7\y\1\p\g\l\i\d\o\7\9\l\1\m\a\z\o\u\d\a\o\q\q\j\o\a\w\l\9\b\x\r\4\e\s\0\n\n\f\0\l\1\h\6\f\9\i\o\b\n\q\6\z\t\l\o\3\0\6\w\q\v\0\1\f\z\q\i\i\5\k\5\i\n\1\2\i\j\h\c\j\p\o\z\4\6\6\n\u\4\d\0\x\3\g\9\t\5\b\d\x\y\u\5\7\q\u\1\a\9\9\l\6\u\o\0\w\7\7\p\d\l\a\1\8\2\g\f\o\2\p\f\3\1\u\n\7\8\w\n\c\w\n\6\v\7\9\n\o\c\a\y\t\n\e\5\h\i\s\t\3\9\1\j\4\8\d\u\y\z\l\k\c\n\c\v\s\r\u\l\s\h\s\f\n\d\c\g\o\m\n\u\4\g\n\e\5\o\a\t\r\0\w\v\i\q\f\r\j\7\v\7\h\g\a\z\h\t\l\z\8\5\i\u\x\q\0\b\x\e\q\8\x\y\v\a\r\6\8\e\h\t\1\h\1\4\9\k\0\b\2\6\w\7\w\c\6\w\e\g\5\3\c\y\6\x\0\7\h\z\t\s\7\b\u\9\q\o\w\8\l\j\o\r\u\x\5\m\r\r\v\9\r\i\e\s\s\u\m\4\1\k\6\v\p\0\a\e\5\p\1\p\6\y\v\y\9\l\7\5\p\q\c\9\x\7\6\y\k\v\l\9\l\x\l\i\h\8\6\t\9\8\j\x\t\v\x\e\8\b\e\u\t\d\4\3\v\2\g\q\z\8\f\s\p\t\j\l\c\x\1\u\4\l\p\q\4\q\2\e\l\g\d\s\1\h\u\y\o\x\g\0\s\z\6\g\k\t\r\7\v\4\1\h\s\l\z\o\1\a\q\g\3\f\4\i\d\k\7\7\q\0\d\0\8\a\4\e\x\b\c\n\l\i\h\l\p\3\2\w\t\m\9\7\5\a\s\u\c\l\2\y\c\8\0\i\q\t\n\8\9\r\h\b\4\w\x\t\d\4\a\4\r\i\l\5\s\a\s\t\a\b\m\g\w\g\t\9\1\1\p\t\9\0\5\1\z\b\5\8\1\t\p\n\6\l\v\n\8\7\j\h\w\2\1\b\z\s\7\t\s\0\e\v\b\g\v\h\p\r\m\e\7\k\p\b\c\9\1\j\5\k\1\m\8\w\x\t\m\0\9\k\2\p\r\o\e\4\j\x\5\l\g\s\z\l\2\e\l\9\w\0\a\g\1\5\e\7\r\z\l\p\x\w\v\d\7\k\q\q\r\d\r\z\1\l\f\j\r\7\q\e\2\a\3\g\p\w\s\v\c\z\n\o\6\0\z\1\k\m\o\h\0\f\6\b\y\l\k\l\e\j\c\j\z\d\n\j\q\p\7\w\z\u\4\p\m\6\x\m\e\e\f\9\x\q\v\3\i\n\c\g\d\d\n\r\i\4\q\j\g\t\a\h\0\2\p\z\b\t\t\y\x\m\4\u\h\g\s\u\w\i\e\w\p\i\n\0\3\g\e\p\a\m\e\v\8\w\8\x\x\0\5\i\v\n\p\0\r\r\d\8\r\q\r\9\0\7\s\2\v\l\5\n\e\f\1\1\e\2\w\k\1\k\0\z\c\9\q\7\0\1\k\3\q\t\u\e\9\5\0\k\0\s\2\9\h\u\2\y\w\o\e\8\q\u\v\b\p\t\g\p\l\j\a\3\1\l\l\i\h\b\9\q\4\7\7\x\2\y\k\e\t\i\m\q\0\e\5\z\9\n\k\l\g\i\v\s\l\t\u\7\l\v\d\e\2\i\4\t\g\u\0\z\0\8\6\w\k\w\1\u\t\s\g\9\h\8\9\5\l\s\s\c\s\w\r\0\s\p\b\t\c\1\t\j\p\2\d\v\e\i\h\9\1\n\q\1\g\5\w\x\j\b\z\v\q\8\r\4\1\d\e\i\1\o\3\8\u\e\k\8\e\o\w\7\v\t\5\w\g\v\d\f\i\7\b\j\0\w\l\c\5\r\5\1\6\d\k\b\d\v\r\m\8\d\i\j\e\u\x\5\i\s\g\b\z\d\h\t\9\x\e\i\r\n\6\v\z\g\u\9\5\1\g\1\n\p\0\z\6\q\q\c\w\7\9\v\g\d\a\s\p\h\1\r\5\8\i\o\g\i\0\q\l\2\2\6\j\q\k\p\w\8\n\l\r\t\2\5\2\f\u\o\l\g\p\0\9\8\i\a\b\l\0\r\d\u\6\9\j\e\v\x\z\o\r\7\l\2\1\n\m\k\z\5\v\5\o\r\5\4\c\v\g\j\i\j\g\x\q\w\j\q\g\1\3\a\4\9\9\b\v\7\m\e\y\6\j\u\f\i\g\7\4\q\2\y\9\x\b\h\k\c\5\0\8\z\2\j\z\2\d\3\w\n\8\o\5\0\3\d\j\z\h\5\l\p\2\m\5\n\n\8\b\m\n\6\i\2\h\n\x\z\g\q\q\j\0\u\p\6\k\8\1\r\b\n\0\z\6\y\k\h\k\g\7\s\a\h\j\6\p\3\k\u\5\2\y\f\b\1\7\3\y\f\s\w\q\d\n\9\a\m\n\d\l\g\8\6\5\k\o\b\9\9\p\e\g\p\x\4\k\r\5\t\9\r\f\t\1\b\n\h\5\g\n\a\z\x\0\c\4\u\w\m\n\e\m\9\4\n\0\z\f\k\g\u\m\a\n\4\4\8\h\e\p\1\w\l\w\4\9\t\6\g\9\s\l\8\r\q\0\f\d\k\h\3\a\7\j\j\6\5\p\4\x\s\w\o\1\p\x\v\v\5\l\c\y\k\w\h\l\b\o\i\0\r\p\g\p\i\l\1\b\n\w\8\n\v\s\v\i\c\i\r\1\1\u\t\o\z\x\9\6\t\j\h\3\p\n\3\5\2\v\7\u\y\z\r\f\s\7\s\u\q\d\b\t\j\i\4\9\v\f\e\7\o\z\q\w\7\z\8\j\w\7\z\r\p\z\i\y\y\c\7\a\z\l\z\m\0\d\4\q\a\m\u\t\o\
z\9\k\l\c\6\4\j\7\b\d\e\c\d\e\5\a\0\o\n\2\b\5\7\k\r\z\w\u\a\8\n\y\3\1\c\i\b\j\v\1\0\m\9\t\k\z\w\p\f\2\0\g\n\b\m\v\y\b\v\3\c\3\k\h\k\o\q\0\u\r\2\p\r\q\r\t\s\j\z\y\q\9\4\n\t\0\f\2\s\b\t\o\w\f\9\k\x\o\g\t\w\5\e\g\g\d\y\a\f\z\v\3\4\n\y\u\s\v\h\b\v\h\8\1\l\3\h\r\5\o\e\l\9\b\q\2\v\g\c\t\q\u\t\e\1\l\u\t\e\7\6\a\3\a\r\k\o\m\g\3\c\m\x\1\o\7\j\q\r\s\k\f\k\v\b\q\j\h\6\g\2\j\0\y\h\q\j\0\p\v\y\5\9\3\r\f\9\v\f\0\x\r\5\1\3\0\z\o\6\m\l\y\s\3\h\4\m\h\1\8\y\2\5\b\p\k\r\c\u\9\j\6\2\o\m\f\h\i\7\n\1\v\9\t\a\b\f\z\5\b\b\g\n\i\o\3\y\n\s\y\k\2\c\p\f\h\t\b\t\e\7\9\d\k\t\n\q\n\7\5\u\s\9\o\g\h\4\e\c\t\5\x\e\x\t\l\b\p\6\q\8\y\d\j\h\f\0\x\d\6\d\n\5\8\m\a\1\t\x\2\n\t\3\z\n\3\2\u\v\b\7\a\y\r\u\l\9\6\7\t\n\j\o\z\m\5\q\2\v\z\3\z\s\1\d\u\f\g\n\m\z\2\g\1\u\g\q\0\g\g\x\t\a\l\n\9\z\z\u\5\6\l\e\z\7\1\f\8\j\d\a\4\q\l\0\n\q\a\7\y\b\w\h\o\a\x\t\a\b\e\o\c\b\5\h\p\1\7\a\a\4\4\s\1\x\h\u\f\r\a\t\1\3\i\o\j\b\l\d\r\7\8\2\0\h\b\x\q\m\t\3\0\1\n\8\f\f\6\s\o\n\g\2\2\2\g\6\c\f\h\8\q\h\x\0\t\4\s\h\d\n\5\a\a\8\v\w\a\b\s\4\h\h\2\f\g\s\x\x\i\x\r\i\t\3\4\b\9\5\q\z\e\g\2\m\d\3\7\8\2\d\6\a\h\s\w\b\e\1\e\g\g\o\5\l\8\w\b\h\z\m\x\e\6\u\r\8\v\w\3\2\s\5\0\b\6\r\d\t\f\q\0\6\4\t\3\b\0\e\1\v\r\s\9\g\s\a\s\j\7\r\e\c\z\l\z\5\0\z\9\k\r\v\8\4\y\3\p\v\r\v\k\d\9\y\a\o\5\x\l\4\i\b\4\y\3\0\l\l\0\8\7\o\7\0\m\n\0\u\j\7\2\z\a\e\h\6\g\4\s\4\c\k\9\v\r\s\6\7\3\c\s\d\0\a\6\s\l\2\h\d\l\a\d\b\a\g\i\v\6\k\r\z\s\e\7\w\k\b\i\c\9\w\z\n\j\p\e\i\i\s\k\3\q\s\c\g\x\5\8\s\s\7\q\b\z\2\x\s\8\2\h\n\5\c\m\j\r\i\s\q\6\a\d\r\2\u\r\s\s\s\w\o\p\y\w\0\v\z\x\l\f\l\h\5\l\e\f\t\h\u\p\x\l\u\2\g\m\g\p\j\2\7\f\v\f\c\w\g\z\c\e\v\k\q\u\d\o\7\6\4\j\y\k\2\l\r\4\4\2\5\1\o\g\1\m\j\s\g\o\o\f\f\q\g\l\f\d\p\x\6\z\8\c\1\1\j\e\o\k\b\s\e\k\4\v\f\y\1\2\0\y\i\g\o\e\h\l\r\s\s\k\p\6\k\n\e\n\c\3\j\v\6\z\1\i\v\w\z\m\r\l\t\s\2\5\n\0\a\g\x\5\5\9\n\a\1\4\2\e\n\5\y\e\k\k\z\4\v\o\9\u\d\d\h\h\r\k\c\v\9\s\g\j\g\v\3\4\g\1\x\p\t\b\l\6\p\t\n\u\w\q\q\9\1\l\m\9\y\i\5\w\h\t\v\n\i\w\o\4\5\o\m\2\c\k\7\t\0\r\o\5\w\e\s\0\u\4\x\n\n\p\2\1\7\y\z\s\h\m\e\z\t\e\9\u\o\s\y\t\a\o\a\2\h\2\1\x\d\0\p\9\n\x\z\y\g\m\m\p\4\8\e\c\5\1\d\i\9\q\g\2\f\d\3\m\q\1\q\8\m\6\p\b\a\d\a\w\i\e\r\0\d\m\w\n\4\8\a\w\4\l\s\9\h\k\w\f\x\h\r\i\0\k\5\e\t\q\c\s\c\b\q\g\p\o\a\g\x\s\n\a\9\z\g\c\7\x\t\h\4\0\x\8\h\x\n\p\j\k\u\a\z\h\b\l\g\s\7\f\6\c\4\2\i\g\9\4\c\u\t\m\s\3\0\m\t\5\n\9\7\c\k\i\d\6\m\b\1\n\t\4\m\i\w\d\a\i\7\g\1\3\g\7\w\s\x\t\d\p\2\v\o\u\3\2\s\h\l\c\9\n\2\h\9\4\m\9\q\f\n\o\u\v\n\l\k\w\4\g\r\k\x\b\5\t\l\5\g\2\4\s\n\b\1\k\g\l\9\p\y\l\b\d\4\h\y\e\z\c\i\6\h\z\f\x\i\u\n\k\9\6\u\7\6\g\x\g\x\l\b\6\q\4\0\f\e\3\0 ]] 00:06:14.295 00:06:14.295 real 0m1.153s 00:06:14.295 user 0m0.805s 00:06:14.295 sys 0m0.225s 00:06:14.295 15:04:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.295 15:04:43 -- common/autotest_common.sh@10 -- # set +x 00:06:14.295 15:04:43 -- dd/basic_rw.sh@1 -- # cleanup 00:06:14.295 15:04:43 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:14.295 15:04:43 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:14.295 15:04:43 -- dd/common.sh@11 -- # local nvme_ref= 00:06:14.295 15:04:43 -- dd/common.sh@12 -- # local size=0xffff 00:06:14.295 15:04:43 -- dd/common.sh@14 -- # local bs=1048576 00:06:14.295 15:04:43 -- dd/common.sh@15 -- # local count=1 00:06:14.295 15:04:43 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:14.295 15:04:43 -- dd/common.sh@18 -- # gen_conf 00:06:14.295 15:04:43 -- dd/common.sh@31 -- # xtrace_disable 00:06:14.295 15:04:43 -- common/autotest_common.sh@10 -- # set +x 00:06:14.295 [2024-11-06 15:04:43.498505] Starting SPDK 
v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:14.295 [2024-11-06 15:04:43.498596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58084 ] 00:06:14.295 { 00:06:14.295 "subsystems": [ 00:06:14.295 { 00:06:14.295 "subsystem": "bdev", 00:06:14.295 "config": [ 00:06:14.295 { 00:06:14.295 "params": { 00:06:14.295 "trtype": "pcie", 00:06:14.295 "traddr": "0000:00:06.0", 00:06:14.295 "name": "Nvme0" 00:06:14.295 }, 00:06:14.295 "method": "bdev_nvme_attach_controller" 00:06:14.295 }, 00:06:14.295 { 00:06:14.295 "method": "bdev_wait_for_examine" 00:06:14.295 } 00:06:14.295 ] 00:06:14.295 } 00:06:14.295 ] 00:06:14.295 } 00:06:14.554 [2024-11-06 15:04:43.622782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.554 [2024-11-06 15:04:43.675736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.554  [2024-11-06T15:04:44.088Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:14.813 00:06:14.813 15:04:43 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:14.813 ************************************ 00:06:14.813 END TEST spdk_dd_basic_rw 00:06:14.813 ************************************ 00:06:14.813 00:06:14.813 real 0m15.651s 00:06:14.813 user 0m11.341s 00:06:14.813 sys 0m2.855s 00:06:14.813 15:04:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.813 15:04:43 -- common/autotest_common.sh@10 -- # set +x 00:06:14.813 15:04:44 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:14.813 15:04:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:14.813 15:04:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.813 15:04:44 -- common/autotest_common.sh@10 -- # set +x 00:06:14.813 ************************************ 00:06:14.813 START TEST spdk_dd_posix 00:06:14.813 ************************************ 00:06:14.813 15:04:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:15.073 * Looking for test storage... 
00:06:15.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:15.073 15:04:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:15.073 15:04:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:15.073 15:04:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:15.073 15:04:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:15.073 15:04:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:15.073 15:04:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:15.073 15:04:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:15.073 15:04:44 -- scripts/common.sh@335 -- # IFS=.-: 00:06:15.073 15:04:44 -- scripts/common.sh@335 -- # read -ra ver1 00:06:15.073 15:04:44 -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.073 15:04:44 -- scripts/common.sh@336 -- # read -ra ver2 00:06:15.073 15:04:44 -- scripts/common.sh@337 -- # local 'op=<' 00:06:15.073 15:04:44 -- scripts/common.sh@339 -- # ver1_l=2 00:06:15.073 15:04:44 -- scripts/common.sh@340 -- # ver2_l=1 00:06:15.073 15:04:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:15.073 15:04:44 -- scripts/common.sh@343 -- # case "$op" in 00:06:15.073 15:04:44 -- scripts/common.sh@344 -- # : 1 00:06:15.073 15:04:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:15.073 15:04:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.073 15:04:44 -- scripts/common.sh@364 -- # decimal 1 00:06:15.073 15:04:44 -- scripts/common.sh@352 -- # local d=1 00:06:15.073 15:04:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.073 15:04:44 -- scripts/common.sh@354 -- # echo 1 00:06:15.073 15:04:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:15.073 15:04:44 -- scripts/common.sh@365 -- # decimal 2 00:06:15.073 15:04:44 -- scripts/common.sh@352 -- # local d=2 00:06:15.073 15:04:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.073 15:04:44 -- scripts/common.sh@354 -- # echo 2 00:06:15.073 15:04:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:15.073 15:04:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:15.073 15:04:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:15.073 15:04:44 -- scripts/common.sh@367 -- # return 0 00:06:15.073 15:04:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.073 15:04:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:15.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.073 --rc genhtml_branch_coverage=1 00:06:15.073 --rc genhtml_function_coverage=1 00:06:15.073 --rc genhtml_legend=1 00:06:15.073 --rc geninfo_all_blocks=1 00:06:15.073 --rc geninfo_unexecuted_blocks=1 00:06:15.073 00:06:15.073 ' 00:06:15.073 15:04:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:15.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.073 --rc genhtml_branch_coverage=1 00:06:15.073 --rc genhtml_function_coverage=1 00:06:15.073 --rc genhtml_legend=1 00:06:15.073 --rc geninfo_all_blocks=1 00:06:15.073 --rc geninfo_unexecuted_blocks=1 00:06:15.073 00:06:15.073 ' 00:06:15.073 15:04:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:15.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.073 --rc genhtml_branch_coverage=1 00:06:15.073 --rc genhtml_function_coverage=1 00:06:15.073 --rc genhtml_legend=1 00:06:15.073 --rc geninfo_all_blocks=1 00:06:15.073 --rc geninfo_unexecuted_blocks=1 00:06:15.073 00:06:15.073 ' 00:06:15.073 15:04:44 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:15.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.073 --rc genhtml_branch_coverage=1 00:06:15.073 --rc genhtml_function_coverage=1 00:06:15.073 --rc genhtml_legend=1 00:06:15.073 --rc geninfo_all_blocks=1 00:06:15.073 --rc geninfo_unexecuted_blocks=1 00:06:15.073 00:06:15.073 ' 00:06:15.073 15:04:44 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:15.073 15:04:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.073 15:04:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.073 15:04:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.073 15:04:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.073 15:04:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.073 15:04:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.073 15:04:44 -- paths/export.sh@5 -- # export PATH 00:06:15.073 15:04:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.073 15:04:44 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:15.073 15:04:44 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:15.073 15:04:44 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:15.073 15:04:44 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:15.073 15:04:44 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:15.073 15:04:44 -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:15.073 15:04:44 -- dd/posix.sh@130 -- # tests 00:06:15.073 15:04:44 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:15.073 * First test run, liburing in use 00:06:15.073 15:04:44 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:15.073 15:04:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.073 15:04:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.073 15:04:44 -- common/autotest_common.sh@10 -- # set +x 00:06:15.073 ************************************ 00:06:15.073 START TEST dd_flag_append 00:06:15.073 ************************************ 00:06:15.073 15:04:44 -- common/autotest_common.sh@1114 -- # append 00:06:15.073 15:04:44 -- dd/posix.sh@16 -- # local dump0 00:06:15.073 15:04:44 -- dd/posix.sh@17 -- # local dump1 00:06:15.073 15:04:44 -- dd/posix.sh@19 -- # gen_bytes 32 00:06:15.073 15:04:44 -- dd/common.sh@98 -- # xtrace_disable 00:06:15.073 15:04:44 -- common/autotest_common.sh@10 -- # set +x 00:06:15.073 15:04:44 -- dd/posix.sh@19 -- # dump0=rf0map4ahnpkwhnlr1ftank1u3cp29h8 00:06:15.073 15:04:44 -- dd/posix.sh@20 -- # gen_bytes 32 00:06:15.073 15:04:44 -- dd/common.sh@98 -- # xtrace_disable 00:06:15.073 15:04:44 -- common/autotest_common.sh@10 -- # set +x 00:06:15.073 15:04:44 -- dd/posix.sh@20 -- # dump1=qv6v4aa3qa7d9c6sz5z1joycmpex5ov3 00:06:15.073 15:04:44 -- dd/posix.sh@22 -- # printf %s rf0map4ahnpkwhnlr1ftank1u3cp29h8 00:06:15.073 15:04:44 -- dd/posix.sh@23 -- # printf %s qv6v4aa3qa7d9c6sz5z1joycmpex5ov3 00:06:15.073 15:04:44 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:15.073 [2024-11-06 15:04:44.298532] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:15.073 [2024-11-06 15:04:44.298648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58148 ] 00:06:15.332 [2024-11-06 15:04:44.435672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.332 [2024-11-06 15:04:44.489571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.332  [2024-11-06T15:04:44.866Z] Copying: 32/32 [B] (average 31 kBps) 00:06:15.591 00:06:15.591 15:04:44 -- dd/posix.sh@27 -- # [[ qv6v4aa3qa7d9c6sz5z1joycmpex5ov3rf0map4ahnpkwhnlr1ftank1u3cp29h8 == \q\v\6\v\4\a\a\3\q\a\7\d\9\c\6\s\z\5\z\1\j\o\y\c\m\p\e\x\5\o\v\3\r\f\0\m\a\p\4\a\h\n\p\k\w\h\n\l\r\1\f\t\a\n\k\1\u\3\c\p\2\9\h\8 ]] 00:06:15.591 00:06:15.591 real 0m0.481s 00:06:15.591 user 0m0.259s 00:06:15.591 sys 0m0.101s 00:06:15.591 ************************************ 00:06:15.591 END TEST dd_flag_append 00:06:15.591 ************************************ 00:06:15.591 15:04:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.591 15:04:44 -- common/autotest_common.sh@10 -- # set +x 00:06:15.591 15:04:44 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:15.591 15:04:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.591 15:04:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.591 15:04:44 -- common/autotest_common.sh@10 -- # set +x 00:06:15.591 ************************************ 00:06:15.591 START TEST dd_flag_directory 00:06:15.591 ************************************ 00:06:15.591 15:04:44 -- common/autotest_common.sh@1114 -- # directory 00:06:15.591 15:04:44 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:15.591 15:04:44 -- common/autotest_common.sh@650 -- # local es=0 00:06:15.591 15:04:44 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:15.591 15:04:44 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.591 15:04:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.591 15:04:44 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.591 15:04:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.591 15:04:44 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.591 15:04:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.591 15:04:44 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.591 15:04:44 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:15.591 15:04:44 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:15.592 [2024-11-06 15:04:44.822341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
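For reference, the dd_flag_append run that completed just above exercises plain file-to-file copying rather than a bdev: two 32-character random strings go into dd.dump0 and dd.dump1, dd.dump0 is copied onto dd.dump1 with --oflag=append, and the result must be the dump1 string followed by the dump0 string. A rough sketch with the same $DD/$DUMP0/$DUMP1 stand-ins; the redirections and the random-string generation are stand-ins for the test's own printf calls and gen_bytes helper, which the trace only shows indirectly:

dump0=$(tr -dc a-z0-9 < /dev/urandom | head -c32)        # stand-in for gen_bytes 32
dump1=$(tr -dc a-z0-9 < /dev/urandom | head -c32)
printf %s "$dump0" > "$DUMP0"
printf %s "$dump1" > "$DUMP1"
"$DD" --if="$DUMP0" --of="$DUMP1" --oflag=append          # append dump0's bytes onto dump1
[[ "$(cat "$DUMP1")" == "${dump1}${dump0}" ]]             # dump1 must now hold dump1 followed by dump0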
00:06:15.592 [2024-11-06 15:04:44.822441] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58175 ] 00:06:15.850 [2024-11-06 15:04:44.955992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.850 [2024-11-06 15:04:45.010436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.850 [2024-11-06 15:04:45.054840] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:15.850 [2024-11-06 15:04:45.054907] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:15.850 [2024-11-06 15:04:45.054935] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.850 [2024-11-06 15:04:45.115325] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:16.109 15:04:45 -- common/autotest_common.sh@653 -- # es=236 00:06:16.109 15:04:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:16.109 15:04:45 -- common/autotest_common.sh@662 -- # es=108 00:06:16.109 15:04:45 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:16.109 15:04:45 -- common/autotest_common.sh@670 -- # es=1 00:06:16.109 15:04:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:16.109 15:04:45 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:16.109 15:04:45 -- common/autotest_common.sh@650 -- # local es=0 00:06:16.109 15:04:45 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:16.109 15:04:45 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.109 15:04:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.109 15:04:45 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.109 15:04:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.109 15:04:45 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.109 15:04:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.109 15:04:45 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.109 15:04:45 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:16.109 15:04:45 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:16.109 [2024-11-06 15:04:45.266844] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:16.109 [2024-11-06 15:04:45.266955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58184 ] 00:06:16.367 [2024-11-06 15:04:45.402598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.367 [2024-11-06 15:04:45.455076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.367 [2024-11-06 15:04:45.502683] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:16.367 [2024-11-06 15:04:45.502755] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:16.367 [2024-11-06 15:04:45.502768] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.367 [2024-11-06 15:04:45.567103] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:16.626 15:04:45 -- common/autotest_common.sh@653 -- # es=236 00:06:16.626 15:04:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:16.626 15:04:45 -- common/autotest_common.sh@662 -- # es=108 00:06:16.626 15:04:45 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:16.626 15:04:45 -- common/autotest_common.sh@670 -- # es=1 00:06:16.626 15:04:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:16.626 00:06:16.626 real 0m0.889s 00:06:16.626 user 0m0.496s 00:06:16.626 sys 0m0.185s 00:06:16.626 15:04:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.626 ************************************ 00:06:16.626 END TEST dd_flag_directory 00:06:16.626 ************************************ 00:06:16.626 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:06:16.626 15:04:45 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:16.626 15:04:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:16.626 15:04:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.626 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:06:16.626 ************************************ 00:06:16.626 START TEST dd_flag_nofollow 00:06:16.626 ************************************ 00:06:16.626 15:04:45 -- common/autotest_common.sh@1114 -- # nofollow 00:06:16.626 15:04:45 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:16.626 15:04:45 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:16.626 15:04:45 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:16.626 15:04:45 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:16.626 15:04:45 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:16.626 15:04:45 -- common/autotest_common.sh@650 -- # local es=0 00:06:16.627 15:04:45 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:16.627 15:04:45 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.627 15:04:45 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.627 15:04:45 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.627 15:04:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.627 15:04:45 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.627 15:04:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.627 15:04:45 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.627 15:04:45 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:16.627 15:04:45 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:16.627 [2024-11-06 15:04:45.774936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:16.627 [2024-11-06 15:04:45.775047] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58213 ] 00:06:16.886 [2024-11-06 15:04:45.910970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.886 [2024-11-06 15:04:45.961099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.886 [2024-11-06 15:04:46.008010] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:16.886 [2024-11-06 15:04:46.008095] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:16.886 [2024-11-06 15:04:46.008125] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.886 [2024-11-06 15:04:46.070509] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:17.144 15:04:46 -- common/autotest_common.sh@653 -- # es=216 00:06:17.144 15:04:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.144 15:04:46 -- common/autotest_common.sh@662 -- # es=88 00:06:17.144 15:04:46 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:17.144 15:04:46 -- common/autotest_common.sh@670 -- # es=1 00:06:17.144 15:04:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.144 15:04:46 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:17.144 15:04:46 -- common/autotest_common.sh@650 -- # local es=0 00:06:17.144 15:04:46 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:17.144 15:04:46 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.144 15:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.144 15:04:46 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.144 15:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.144 15:04:46 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.144 15:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.144 15:04:46 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.144 15:04:46 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:17.144 15:04:46 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:17.144 [2024-11-06 15:04:46.224068] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:17.144 [2024-11-06 15:04:46.224170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58222 ] 00:06:17.144 [2024-11-06 15:04:46.359483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.144 [2024-11-06 15:04:46.408027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.403 [2024-11-06 15:04:46.453352] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:17.403 [2024-11-06 15:04:46.453422] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:17.403 [2024-11-06 15:04:46.453451] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.403 [2024-11-06 15:04:46.515810] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:17.403 15:04:46 -- common/autotest_common.sh@653 -- # es=216 00:06:17.403 15:04:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.403 15:04:46 -- common/autotest_common.sh@662 -- # es=88 00:06:17.403 15:04:46 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:17.403 15:04:46 -- common/autotest_common.sh@670 -- # es=1 00:06:17.403 15:04:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.403 15:04:46 -- dd/posix.sh@46 -- # gen_bytes 512 00:06:17.403 15:04:46 -- dd/common.sh@98 -- # xtrace_disable 00:06:17.403 15:04:46 -- common/autotest_common.sh@10 -- # set +x 00:06:17.403 15:04:46 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:17.403 [2024-11-06 15:04:46.677202] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:17.403 [2024-11-06 15:04:46.677314] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58230 ] 00:06:17.662 [2024-11-06 15:04:46.815089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.662 [2024-11-06 15:04:46.863640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.662  [2024-11-06T15:04:47.196Z] Copying: 512/512 [B] (average 500 kBps) 00:06:17.921 00:06:17.921 15:04:47 -- dd/posix.sh@49 -- # [[ d3pbluqut5ois66j01yvsay4ai8gca4ewdtw4tpm2nb1vgksmdmav5z6ra17isflfom1w2vzii1g29f1odapgx2p7ps2lwkpthrzd5k21orv8orklngsn2ykeuiq44jb4yy3ogc46pfvt1mkqzuh35xwog9hz5nyml1vbvczz5kfzuugtyrf48zrdu6x6ytm2n3rip1yxpjays7jtv495ek9lj8yznu17wxq6udrwtj3zd6ph5r4bmpwewawb0bzh5ni5n1izsh27g6t5qs5rx8cn0m95q9z37pdtzwip1o6yhgniva9h1xzhndae9lr704yxarr5fh1eap2x8499fskw3v2wzxcbbffu74jb0lu7t2n7ldmiw226u5324je2y3c7ah2tkxmqfki7ltmuokvum81ad1t7ln457jrdpsk190ctbskmx5zcixw6b3wykz6tntlx1nl6zl3punuuou50w16jom17g4tm7ch8dvohggvu5491db4lnohcrfl == \d\3\p\b\l\u\q\u\t\5\o\i\s\6\6\j\0\1\y\v\s\a\y\4\a\i\8\g\c\a\4\e\w\d\t\w\4\t\p\m\2\n\b\1\v\g\k\s\m\d\m\a\v\5\z\6\r\a\1\7\i\s\f\l\f\o\m\1\w\2\v\z\i\i\1\g\2\9\f\1\o\d\a\p\g\x\2\p\7\p\s\2\l\w\k\p\t\h\r\z\d\5\k\2\1\o\r\v\8\o\r\k\l\n\g\s\n\2\y\k\e\u\i\q\4\4\j\b\4\y\y\3\o\g\c\4\6\p\f\v\t\1\m\k\q\z\u\h\3\5\x\w\o\g\9\h\z\5\n\y\m\l\1\v\b\v\c\z\z\5\k\f\z\u\u\g\t\y\r\f\4\8\z\r\d\u\6\x\6\y\t\m\2\n\3\r\i\p\1\y\x\p\j\a\y\s\7\j\t\v\4\9\5\e\k\9\l\j\8\y\z\n\u\1\7\w\x\q\6\u\d\r\w\t\j\3\z\d\6\p\h\5\r\4\b\m\p\w\e\w\a\w\b\0\b\z\h\5\n\i\5\n\1\i\z\s\h\2\7\g\6\t\5\q\s\5\r\x\8\c\n\0\m\9\5\q\9\z\3\7\p\d\t\z\w\i\p\1\o\6\y\h\g\n\i\v\a\9\h\1\x\z\h\n\d\a\e\9\l\r\7\0\4\y\x\a\r\r\5\f\h\1\e\a\p\2\x\8\4\9\9\f\s\k\w\3\v\2\w\z\x\c\b\b\f\f\u\7\4\j\b\0\l\u\7\t\2\n\7\l\d\m\i\w\2\2\6\u\5\3\2\4\j\e\2\y\3\c\7\a\h\2\t\k\x\m\q\f\k\i\7\l\t\m\u\o\k\v\u\m\8\1\a\d\1\t\7\l\n\4\5\7\j\r\d\p\s\k\1\9\0\c\t\b\s\k\m\x\5\z\c\i\x\w\6\b\3\w\y\k\z\6\t\n\t\l\x\1\n\l\6\z\l\3\p\u\n\u\u\o\u\5\0\w\1\6\j\o\m\1\7\g\4\t\m\7\c\h\8\d\v\o\h\g\g\v\u\5\4\9\1\d\b\4\l\n\o\h\c\r\f\l ]] 00:06:17.921 00:06:17.921 real 0m1.386s 00:06:17.921 user 0m0.781s 00:06:17.921 sys 0m0.272s 00:06:17.921 15:04:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.921 15:04:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.921 ************************************ 00:06:17.921 END TEST dd_flag_nofollow 00:06:17.921 ************************************ 00:06:17.921 15:04:47 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:17.921 15:04:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.921 15:04:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.921 15:04:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.921 ************************************ 00:06:17.921 START TEST dd_flag_noatime 00:06:17.921 ************************************ 00:06:17.921 15:04:47 -- common/autotest_common.sh@1114 -- # noatime 00:06:17.921 15:04:47 -- dd/posix.sh@53 -- # local atime_if 00:06:17.921 15:04:47 -- dd/posix.sh@54 -- # local atime_of 00:06:17.921 15:04:47 -- dd/posix.sh@58 -- # gen_bytes 512 00:06:17.921 15:04:47 -- dd/common.sh@98 -- # xtrace_disable 00:06:17.921 15:04:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.921 15:04:47 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:17.921 15:04:47 -- dd/posix.sh@60 -- # atime_if=1730905486 
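Note on the dd_flag_nofollow case that ends above: it creates symlinks to both dump files, asserts that spdk_dd refuses to open them when the nofollow flag is set ("Too many levels of symbolic links", i.e. ELOOP), and finally confirms that a plain copy through the link still succeeds. A condensed sketch of those three steps:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    T=/home/vagrant/spdk_repo/spdk/test/dd
    ln -fs "$T/dd.dump0" "$T/dd.dump0.link"
    ln -fs "$T/dd.dump1" "$T/dd.dump1.link"
    ! "$DD" --if="$T/dd.dump0.link" --iflag=nofollow --of="$T/dd.dump1"   # must fail: ELOOP
    ! "$DD" --if="$T/dd.dump0" --of="$T/dd.dump1.link" --oflag=nofollow   # must fail: ELOOP
    "$DD" --if="$T/dd.dump0.link" --of="$T/dd.dump1"                      # plain copy follows the link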
00:06:17.921 15:04:47 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:17.921 15:04:47 -- dd/posix.sh@61 -- # atime_of=1730905487 00:06:17.921 15:04:47 -- dd/posix.sh@66 -- # sleep 1 00:06:19.299 15:04:48 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.299 [2024-11-06 15:04:48.227530] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.299 [2024-11-06 15:04:48.227648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58270 ] 00:06:19.299 [2024-11-06 15:04:48.367181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.299 [2024-11-06 15:04:48.435250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.299  [2024-11-06T15:04:48.833Z] Copying: 512/512 [B] (average 500 kBps) 00:06:19.558 00:06:19.558 15:04:48 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:19.558 15:04:48 -- dd/posix.sh@69 -- # (( atime_if == 1730905486 )) 00:06:19.558 15:04:48 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.558 15:04:48 -- dd/posix.sh@70 -- # (( atime_of == 1730905487 )) 00:06:19.558 15:04:48 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.558 [2024-11-06 15:04:48.751896] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:19.558 [2024-11-06 15:04:48.751999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58282 ] 00:06:19.817 [2024-11-06 15:04:48.884929] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.817 [2024-11-06 15:04:48.935636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.817  [2024-11-06T15:04:49.351Z] Copying: 512/512 [B] (average 500 kBps) 00:06:20.076 00:06:20.076 15:04:49 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:20.076 15:04:49 -- dd/posix.sh@73 -- # (( atime_if < 1730905488 )) 00:06:20.076 00:06:20.076 real 0m2.022s 00:06:20.076 user 0m0.545s 00:06:20.076 sys 0m0.210s 00:06:20.076 15:04:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.076 ************************************ 00:06:20.076 END TEST dd_flag_noatime 00:06:20.076 15:04:49 -- common/autotest_common.sh@10 -- # set +x 00:06:20.076 ************************************ 00:06:20.076 15:04:49 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:20.076 15:04:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:20.076 15:04:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.076 15:04:49 -- common/autotest_common.sh@10 -- # set +x 00:06:20.076 ************************************ 00:06:20.076 START TEST dd_flags_misc 00:06:20.077 ************************************ 00:06:20.077 15:04:49 -- common/autotest_common.sh@1114 -- # io 00:06:20.077 15:04:49 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:20.077 15:04:49 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:20.077 15:04:49 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:20.077 15:04:49 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:20.077 15:04:49 -- dd/posix.sh@86 -- # gen_bytes 512 00:06:20.077 15:04:49 -- dd/common.sh@98 -- # xtrace_disable 00:06:20.077 15:04:49 -- common/autotest_common.sh@10 -- # set +x 00:06:20.077 15:04:49 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:20.077 15:04:49 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:20.077 [2024-11-06 15:04:49.296373] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
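Note on the dd_flag_noatime run above: it records the source file's access time with stat --printf=%X, sleeps one second, copies with --iflag=noatime, and checks that the recorded atime is unchanged; a follow-up copy without the flag is then expected to advance it. A minimal sketch of that pattern:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    T=/home/vagrant/spdk_repo/spdk/test/dd
    atime_if=$(stat --printf=%X "$T/dd.dump0")
    sleep 1
    "$DD" --if="$T/dd.dump0" --iflag=noatime --of="$T/dd.dump1"
    (( atime_if == $(stat --printf=%X "$T/dd.dump0") ))   # noatime: the read must not bump atime
    "$DD" --if="$T/dd.dump0" --of="$T/dd.dump1"           # plain copy, no noatime flag
    (( atime_if < $(stat --printf=%X "$T/dd.dump0") ))    # now the atime is expected to have advanced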
00:06:20.077 [2024-11-06 15:04:49.296476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58308 ] 00:06:20.335 [2024-11-06 15:04:49.431908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.335 [2024-11-06 15:04:49.480769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.335  [2024-11-06T15:04:49.869Z] Copying: 512/512 [B] (average 500 kBps) 00:06:20.594 00:06:20.594 15:04:49 -- dd/posix.sh@93 -- # [[ brh4q7l3wyqu95ooo1qbxlooe29c6u3ky368omr8stocl3j4z90xa99138ph5iwtyvlnje8mwmds5xy9gjf7mpyhnlbww8nok9vlovf2wj2jk4rj5q001c1lf8h059bxnae05o16zqt2b550rwgqqqcz2qcfpaknjc64x7l6ughm80cdf4sqorkdws8gelg3h6j26n9cm7m36tley6ve4q11h9qav6c78zmucutc2qn3l5fr2wt6bv252tp9bji9s47evnfc2pj9ru320k4wky97v6nv56cte8dspl080u2kmbmu7q15xx2lgwcbapjfzuofx3mb0xk791x44defazhj6kzh4qsw4viu31mpaltyshz93zstvjfj6nlk72yvokf35j2bb52xpdrkb81od544099ksbx8a2a30eq6fbm4cvn0oywb9duw7zfxbfzghe2cj0674unpewpqb6dback727570zz6ju1iz7pxj9efmk7jcsmsypowx7vrdg3o == \b\r\h\4\q\7\l\3\w\y\q\u\9\5\o\o\o\1\q\b\x\l\o\o\e\2\9\c\6\u\3\k\y\3\6\8\o\m\r\8\s\t\o\c\l\3\j\4\z\9\0\x\a\9\9\1\3\8\p\h\5\i\w\t\y\v\l\n\j\e\8\m\w\m\d\s\5\x\y\9\g\j\f\7\m\p\y\h\n\l\b\w\w\8\n\o\k\9\v\l\o\v\f\2\w\j\2\j\k\4\r\j\5\q\0\0\1\c\1\l\f\8\h\0\5\9\b\x\n\a\e\0\5\o\1\6\z\q\t\2\b\5\5\0\r\w\g\q\q\q\c\z\2\q\c\f\p\a\k\n\j\c\6\4\x\7\l\6\u\g\h\m\8\0\c\d\f\4\s\q\o\r\k\d\w\s\8\g\e\l\g\3\h\6\j\2\6\n\9\c\m\7\m\3\6\t\l\e\y\6\v\e\4\q\1\1\h\9\q\a\v\6\c\7\8\z\m\u\c\u\t\c\2\q\n\3\l\5\f\r\2\w\t\6\b\v\2\5\2\t\p\9\b\j\i\9\s\4\7\e\v\n\f\c\2\p\j\9\r\u\3\2\0\k\4\w\k\y\9\7\v\6\n\v\5\6\c\t\e\8\d\s\p\l\0\8\0\u\2\k\m\b\m\u\7\q\1\5\x\x\2\l\g\w\c\b\a\p\j\f\z\u\o\f\x\3\m\b\0\x\k\7\9\1\x\4\4\d\e\f\a\z\h\j\6\k\z\h\4\q\s\w\4\v\i\u\3\1\m\p\a\l\t\y\s\h\z\9\3\z\s\t\v\j\f\j\6\n\l\k\7\2\y\v\o\k\f\3\5\j\2\b\b\5\2\x\p\d\r\k\b\8\1\o\d\5\4\4\0\9\9\k\s\b\x\8\a\2\a\3\0\e\q\6\f\b\m\4\c\v\n\0\o\y\w\b\9\d\u\w\7\z\f\x\b\f\z\g\h\e\2\c\j\0\6\7\4\u\n\p\e\w\p\q\b\6\d\b\a\c\k\7\2\7\5\7\0\z\z\6\j\u\1\i\z\7\p\x\j\9\e\f\m\k\7\j\c\s\m\s\y\p\o\w\x\7\v\r\d\g\3\o ]] 00:06:20.594 15:04:49 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:20.594 15:04:49 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:20.594 [2024-11-06 15:04:49.741269] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
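A reading aid for the long verification lines in these runs: a check such as [[ brh4... == \b\r\h\4... ]] is bash's [[ == ]] pattern match as rendered by xtrace; the right-hand side was quoted in the script, so the trace prints every character backslash-escaped to show it is matched literally rather than as a glob. In plain form the check is roughly (variable names here are illustrative):

    expected=$(< /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0)
    actual=$(< /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1)
    [[ $actual == "$expected" ]]   # quoted RHS disables glob interpretation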
00:06:20.594 [2024-11-06 15:04:49.741371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58316 ] 00:06:20.594 [2024-11-06 15:04:49.868483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.854 [2024-11-06 15:04:49.919690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.854  [2024-11-06T15:04:50.389Z] Copying: 512/512 [B] (average 500 kBps) 00:06:21.114 00:06:21.114 15:04:50 -- dd/posix.sh@93 -- # [[ brh4q7l3wyqu95ooo1qbxlooe29c6u3ky368omr8stocl3j4z90xa99138ph5iwtyvlnje8mwmds5xy9gjf7mpyhnlbww8nok9vlovf2wj2jk4rj5q001c1lf8h059bxnae05o16zqt2b550rwgqqqcz2qcfpaknjc64x7l6ughm80cdf4sqorkdws8gelg3h6j26n9cm7m36tley6ve4q11h9qav6c78zmucutc2qn3l5fr2wt6bv252tp9bji9s47evnfc2pj9ru320k4wky97v6nv56cte8dspl080u2kmbmu7q15xx2lgwcbapjfzuofx3mb0xk791x44defazhj6kzh4qsw4viu31mpaltyshz93zstvjfj6nlk72yvokf35j2bb52xpdrkb81od544099ksbx8a2a30eq6fbm4cvn0oywb9duw7zfxbfzghe2cj0674unpewpqb6dback727570zz6ju1iz7pxj9efmk7jcsmsypowx7vrdg3o == \b\r\h\4\q\7\l\3\w\y\q\u\9\5\o\o\o\1\q\b\x\l\o\o\e\2\9\c\6\u\3\k\y\3\6\8\o\m\r\8\s\t\o\c\l\3\j\4\z\9\0\x\a\9\9\1\3\8\p\h\5\i\w\t\y\v\l\n\j\e\8\m\w\m\d\s\5\x\y\9\g\j\f\7\m\p\y\h\n\l\b\w\w\8\n\o\k\9\v\l\o\v\f\2\w\j\2\j\k\4\r\j\5\q\0\0\1\c\1\l\f\8\h\0\5\9\b\x\n\a\e\0\5\o\1\6\z\q\t\2\b\5\5\0\r\w\g\q\q\q\c\z\2\q\c\f\p\a\k\n\j\c\6\4\x\7\l\6\u\g\h\m\8\0\c\d\f\4\s\q\o\r\k\d\w\s\8\g\e\l\g\3\h\6\j\2\6\n\9\c\m\7\m\3\6\t\l\e\y\6\v\e\4\q\1\1\h\9\q\a\v\6\c\7\8\z\m\u\c\u\t\c\2\q\n\3\l\5\f\r\2\w\t\6\b\v\2\5\2\t\p\9\b\j\i\9\s\4\7\e\v\n\f\c\2\p\j\9\r\u\3\2\0\k\4\w\k\y\9\7\v\6\n\v\5\6\c\t\e\8\d\s\p\l\0\8\0\u\2\k\m\b\m\u\7\q\1\5\x\x\2\l\g\w\c\b\a\p\j\f\z\u\o\f\x\3\m\b\0\x\k\7\9\1\x\4\4\d\e\f\a\z\h\j\6\k\z\h\4\q\s\w\4\v\i\u\3\1\m\p\a\l\t\y\s\h\z\9\3\z\s\t\v\j\f\j\6\n\l\k\7\2\y\v\o\k\f\3\5\j\2\b\b\5\2\x\p\d\r\k\b\8\1\o\d\5\4\4\0\9\9\k\s\b\x\8\a\2\a\3\0\e\q\6\f\b\m\4\c\v\n\0\o\y\w\b\9\d\u\w\7\z\f\x\b\f\z\g\h\e\2\c\j\0\6\7\4\u\n\p\e\w\p\q\b\6\d\b\a\c\k\7\2\7\5\7\0\z\z\6\j\u\1\i\z\7\p\x\j\9\e\f\m\k\7\j\c\s\m\s\y\p\o\w\x\7\v\r\d\g\3\o ]] 00:06:21.114 15:04:50 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:21.114 15:04:50 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:21.114 [2024-11-06 15:04:50.186567] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:21.114 [2024-11-06 15:04:50.186711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58323 ] 00:06:21.114 [2024-11-06 15:04:50.316082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.114 [2024-11-06 15:04:50.366045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.373  [2024-11-06T15:04:50.648Z] Copying: 512/512 [B] (average 125 kBps) 00:06:21.373 00:06:21.373 15:04:50 -- dd/posix.sh@93 -- # [[ brh4q7l3wyqu95ooo1qbxlooe29c6u3ky368omr8stocl3j4z90xa99138ph5iwtyvlnje8mwmds5xy9gjf7mpyhnlbww8nok9vlovf2wj2jk4rj5q001c1lf8h059bxnae05o16zqt2b550rwgqqqcz2qcfpaknjc64x7l6ughm80cdf4sqorkdws8gelg3h6j26n9cm7m36tley6ve4q11h9qav6c78zmucutc2qn3l5fr2wt6bv252tp9bji9s47evnfc2pj9ru320k4wky97v6nv56cte8dspl080u2kmbmu7q15xx2lgwcbapjfzuofx3mb0xk791x44defazhj6kzh4qsw4viu31mpaltyshz93zstvjfj6nlk72yvokf35j2bb52xpdrkb81od544099ksbx8a2a30eq6fbm4cvn0oywb9duw7zfxbfzghe2cj0674unpewpqb6dback727570zz6ju1iz7pxj9efmk7jcsmsypowx7vrdg3o == \b\r\h\4\q\7\l\3\w\y\q\u\9\5\o\o\o\1\q\b\x\l\o\o\e\2\9\c\6\u\3\k\y\3\6\8\o\m\r\8\s\t\o\c\l\3\j\4\z\9\0\x\a\9\9\1\3\8\p\h\5\i\w\t\y\v\l\n\j\e\8\m\w\m\d\s\5\x\y\9\g\j\f\7\m\p\y\h\n\l\b\w\w\8\n\o\k\9\v\l\o\v\f\2\w\j\2\j\k\4\r\j\5\q\0\0\1\c\1\l\f\8\h\0\5\9\b\x\n\a\e\0\5\o\1\6\z\q\t\2\b\5\5\0\r\w\g\q\q\q\c\z\2\q\c\f\p\a\k\n\j\c\6\4\x\7\l\6\u\g\h\m\8\0\c\d\f\4\s\q\o\r\k\d\w\s\8\g\e\l\g\3\h\6\j\2\6\n\9\c\m\7\m\3\6\t\l\e\y\6\v\e\4\q\1\1\h\9\q\a\v\6\c\7\8\z\m\u\c\u\t\c\2\q\n\3\l\5\f\r\2\w\t\6\b\v\2\5\2\t\p\9\b\j\i\9\s\4\7\e\v\n\f\c\2\p\j\9\r\u\3\2\0\k\4\w\k\y\9\7\v\6\n\v\5\6\c\t\e\8\d\s\p\l\0\8\0\u\2\k\m\b\m\u\7\q\1\5\x\x\2\l\g\w\c\b\a\p\j\f\z\u\o\f\x\3\m\b\0\x\k\7\9\1\x\4\4\d\e\f\a\z\h\j\6\k\z\h\4\q\s\w\4\v\i\u\3\1\m\p\a\l\t\y\s\h\z\9\3\z\s\t\v\j\f\j\6\n\l\k\7\2\y\v\o\k\f\3\5\j\2\b\b\5\2\x\p\d\r\k\b\8\1\o\d\5\4\4\0\9\9\k\s\b\x\8\a\2\a\3\0\e\q\6\f\b\m\4\c\v\n\0\o\y\w\b\9\d\u\w\7\z\f\x\b\f\z\g\h\e\2\c\j\0\6\7\4\u\n\p\e\w\p\q\b\6\d\b\a\c\k\7\2\7\5\7\0\z\z\6\j\u\1\i\z\7\p\x\j\9\e\f\m\k\7\j\c\s\m\s\y\p\o\w\x\7\v\r\d\g\3\o ]] 00:06:21.373 15:04:50 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:21.373 15:04:50 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:21.373 [2024-11-06 15:04:50.639994] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:21.373 [2024-11-06 15:04:50.640108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58330 ] 00:06:21.632 [2024-11-06 15:04:50.776195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.632 [2024-11-06 15:04:50.825038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.632  [2024-11-06T15:04:51.166Z] Copying: 512/512 [B] (average 250 kBps) 00:06:21.891 00:06:21.891 15:04:51 -- dd/posix.sh@93 -- # [[ brh4q7l3wyqu95ooo1qbxlooe29c6u3ky368omr8stocl3j4z90xa99138ph5iwtyvlnje8mwmds5xy9gjf7mpyhnlbww8nok9vlovf2wj2jk4rj5q001c1lf8h059bxnae05o16zqt2b550rwgqqqcz2qcfpaknjc64x7l6ughm80cdf4sqorkdws8gelg3h6j26n9cm7m36tley6ve4q11h9qav6c78zmucutc2qn3l5fr2wt6bv252tp9bji9s47evnfc2pj9ru320k4wky97v6nv56cte8dspl080u2kmbmu7q15xx2lgwcbapjfzuofx3mb0xk791x44defazhj6kzh4qsw4viu31mpaltyshz93zstvjfj6nlk72yvokf35j2bb52xpdrkb81od544099ksbx8a2a30eq6fbm4cvn0oywb9duw7zfxbfzghe2cj0674unpewpqb6dback727570zz6ju1iz7pxj9efmk7jcsmsypowx7vrdg3o == \b\r\h\4\q\7\l\3\w\y\q\u\9\5\o\o\o\1\q\b\x\l\o\o\e\2\9\c\6\u\3\k\y\3\6\8\o\m\r\8\s\t\o\c\l\3\j\4\z\9\0\x\a\9\9\1\3\8\p\h\5\i\w\t\y\v\l\n\j\e\8\m\w\m\d\s\5\x\y\9\g\j\f\7\m\p\y\h\n\l\b\w\w\8\n\o\k\9\v\l\o\v\f\2\w\j\2\j\k\4\r\j\5\q\0\0\1\c\1\l\f\8\h\0\5\9\b\x\n\a\e\0\5\o\1\6\z\q\t\2\b\5\5\0\r\w\g\q\q\q\c\z\2\q\c\f\p\a\k\n\j\c\6\4\x\7\l\6\u\g\h\m\8\0\c\d\f\4\s\q\o\r\k\d\w\s\8\g\e\l\g\3\h\6\j\2\6\n\9\c\m\7\m\3\6\t\l\e\y\6\v\e\4\q\1\1\h\9\q\a\v\6\c\7\8\z\m\u\c\u\t\c\2\q\n\3\l\5\f\r\2\w\t\6\b\v\2\5\2\t\p\9\b\j\i\9\s\4\7\e\v\n\f\c\2\p\j\9\r\u\3\2\0\k\4\w\k\y\9\7\v\6\n\v\5\6\c\t\e\8\d\s\p\l\0\8\0\u\2\k\m\b\m\u\7\q\1\5\x\x\2\l\g\w\c\b\a\p\j\f\z\u\o\f\x\3\m\b\0\x\k\7\9\1\x\4\4\d\e\f\a\z\h\j\6\k\z\h\4\q\s\w\4\v\i\u\3\1\m\p\a\l\t\y\s\h\z\9\3\z\s\t\v\j\f\j\6\n\l\k\7\2\y\v\o\k\f\3\5\j\2\b\b\5\2\x\p\d\r\k\b\8\1\o\d\5\4\4\0\9\9\k\s\b\x\8\a\2\a\3\0\e\q\6\f\b\m\4\c\v\n\0\o\y\w\b\9\d\u\w\7\z\f\x\b\f\z\g\h\e\2\c\j\0\6\7\4\u\n\p\e\w\p\q\b\6\d\b\a\c\k\7\2\7\5\7\0\z\z\6\j\u\1\i\z\7\p\x\j\9\e\f\m\k\7\j\c\s\m\s\y\p\o\w\x\7\v\r\d\g\3\o ]] 00:06:21.891 15:04:51 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:21.891 15:04:51 -- dd/posix.sh@86 -- # gen_bytes 512 00:06:21.891 15:04:51 -- dd/common.sh@98 -- # xtrace_disable 00:06:21.891 15:04:51 -- common/autotest_common.sh@10 -- # set +x 00:06:21.891 15:04:51 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:21.891 15:04:51 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:21.891 [2024-11-06 15:04:51.098723] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:21.891 [2024-11-06 15:04:51.098813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58333 ] 00:06:22.150 [2024-11-06 15:04:51.236020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.150 [2024-11-06 15:04:51.289332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.150  [2024-11-06T15:04:51.683Z] Copying: 512/512 [B] (average 500 kBps) 00:06:22.408 00:06:22.408 15:04:51 -- dd/posix.sh@93 -- # [[ g6n5cwj7tsgxrtf36r5jesmmpb4iw46tgany4unvwyva55rq2kgaxtlujnjyfohpincbuq4c5dpystm4tcz3kqa790mdyf3z0svg2jbycmh24htjfnvssplmyidq18p45ejzt9glzwb0h6phfl64h57fcg5vxdv6apes71ycq7nuvd1agucw5lag911tnqtngn8rp45j5wcq4yi47eqc5xmonn54hxh6bnult6lw37g2v91di77ke7bmu8zoql603w6f8j62oixn4jkron9ss5htpwxx6o1agaukew8zlp2xgrgmbi3ykq8bc4j2s7ddl96cgyohwi2r1h3s3oa1s4aru3ud1la1c4uo27phqft6fh1qu83hw4iwikwvutlcx93vdwfu4ydgton8f90ppfjmrx0332cgaozkv5hhvnm7t60u2krx8q09soll2zltrmw5v0ns46i7zgh6w0e7clmnermk2kligbw4gfrv46ueoous4te8okbhuxb0ljki == \g\6\n\5\c\w\j\7\t\s\g\x\r\t\f\3\6\r\5\j\e\s\m\m\p\b\4\i\w\4\6\t\g\a\n\y\4\u\n\v\w\y\v\a\5\5\r\q\2\k\g\a\x\t\l\u\j\n\j\y\f\o\h\p\i\n\c\b\u\q\4\c\5\d\p\y\s\t\m\4\t\c\z\3\k\q\a\7\9\0\m\d\y\f\3\z\0\s\v\g\2\j\b\y\c\m\h\2\4\h\t\j\f\n\v\s\s\p\l\m\y\i\d\q\1\8\p\4\5\e\j\z\t\9\g\l\z\w\b\0\h\6\p\h\f\l\6\4\h\5\7\f\c\g\5\v\x\d\v\6\a\p\e\s\7\1\y\c\q\7\n\u\v\d\1\a\g\u\c\w\5\l\a\g\9\1\1\t\n\q\t\n\g\n\8\r\p\4\5\j\5\w\c\q\4\y\i\4\7\e\q\c\5\x\m\o\n\n\5\4\h\x\h\6\b\n\u\l\t\6\l\w\3\7\g\2\v\9\1\d\i\7\7\k\e\7\b\m\u\8\z\o\q\l\6\0\3\w\6\f\8\j\6\2\o\i\x\n\4\j\k\r\o\n\9\s\s\5\h\t\p\w\x\x\6\o\1\a\g\a\u\k\e\w\8\z\l\p\2\x\g\r\g\m\b\i\3\y\k\q\8\b\c\4\j\2\s\7\d\d\l\9\6\c\g\y\o\h\w\i\2\r\1\h\3\s\3\o\a\1\s\4\a\r\u\3\u\d\1\l\a\1\c\4\u\o\2\7\p\h\q\f\t\6\f\h\1\q\u\8\3\h\w\4\i\w\i\k\w\v\u\t\l\c\x\9\3\v\d\w\f\u\4\y\d\g\t\o\n\8\f\9\0\p\p\f\j\m\r\x\0\3\3\2\c\g\a\o\z\k\v\5\h\h\v\n\m\7\t\6\0\u\2\k\r\x\8\q\0\9\s\o\l\l\2\z\l\t\r\m\w\5\v\0\n\s\4\6\i\7\z\g\h\6\w\0\e\7\c\l\m\n\e\r\m\k\2\k\l\i\g\b\w\4\g\f\r\v\4\6\u\e\o\o\u\s\4\t\e\8\o\k\b\h\u\x\b\0\l\j\k\i ]] 00:06:22.408 15:04:51 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:22.408 15:04:51 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:22.408 [2024-11-06 15:04:51.542465] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:22.409 [2024-11-06 15:04:51.542547] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58340 ] 00:06:22.409 [2024-11-06 15:04:51.669250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.667 [2024-11-06 15:04:51.719484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.668  [2024-11-06T15:04:51.943Z] Copying: 512/512 [B] (average 500 kBps) 00:06:22.668 00:06:22.668 15:04:51 -- dd/posix.sh@93 -- # [[ g6n5cwj7tsgxrtf36r5jesmmpb4iw46tgany4unvwyva55rq2kgaxtlujnjyfohpincbuq4c5dpystm4tcz3kqa790mdyf3z0svg2jbycmh24htjfnvssplmyidq18p45ejzt9glzwb0h6phfl64h57fcg5vxdv6apes71ycq7nuvd1agucw5lag911tnqtngn8rp45j5wcq4yi47eqc5xmonn54hxh6bnult6lw37g2v91di77ke7bmu8zoql603w6f8j62oixn4jkron9ss5htpwxx6o1agaukew8zlp2xgrgmbi3ykq8bc4j2s7ddl96cgyohwi2r1h3s3oa1s4aru3ud1la1c4uo27phqft6fh1qu83hw4iwikwvutlcx93vdwfu4ydgton8f90ppfjmrx0332cgaozkv5hhvnm7t60u2krx8q09soll2zltrmw5v0ns46i7zgh6w0e7clmnermk2kligbw4gfrv46ueoous4te8okbhuxb0ljki == \g\6\n\5\c\w\j\7\t\s\g\x\r\t\f\3\6\r\5\j\e\s\m\m\p\b\4\i\w\4\6\t\g\a\n\y\4\u\n\v\w\y\v\a\5\5\r\q\2\k\g\a\x\t\l\u\j\n\j\y\f\o\h\p\i\n\c\b\u\q\4\c\5\d\p\y\s\t\m\4\t\c\z\3\k\q\a\7\9\0\m\d\y\f\3\z\0\s\v\g\2\j\b\y\c\m\h\2\4\h\t\j\f\n\v\s\s\p\l\m\y\i\d\q\1\8\p\4\5\e\j\z\t\9\g\l\z\w\b\0\h\6\p\h\f\l\6\4\h\5\7\f\c\g\5\v\x\d\v\6\a\p\e\s\7\1\y\c\q\7\n\u\v\d\1\a\g\u\c\w\5\l\a\g\9\1\1\t\n\q\t\n\g\n\8\r\p\4\5\j\5\w\c\q\4\y\i\4\7\e\q\c\5\x\m\o\n\n\5\4\h\x\h\6\b\n\u\l\t\6\l\w\3\7\g\2\v\9\1\d\i\7\7\k\e\7\b\m\u\8\z\o\q\l\6\0\3\w\6\f\8\j\6\2\o\i\x\n\4\j\k\r\o\n\9\s\s\5\h\t\p\w\x\x\6\o\1\a\g\a\u\k\e\w\8\z\l\p\2\x\g\r\g\m\b\i\3\y\k\q\8\b\c\4\j\2\s\7\d\d\l\9\6\c\g\y\o\h\w\i\2\r\1\h\3\s\3\o\a\1\s\4\a\r\u\3\u\d\1\l\a\1\c\4\u\o\2\7\p\h\q\f\t\6\f\h\1\q\u\8\3\h\w\4\i\w\i\k\w\v\u\t\l\c\x\9\3\v\d\w\f\u\4\y\d\g\t\o\n\8\f\9\0\p\p\f\j\m\r\x\0\3\3\2\c\g\a\o\z\k\v\5\h\h\v\n\m\7\t\6\0\u\2\k\r\x\8\q\0\9\s\o\l\l\2\z\l\t\r\m\w\5\v\0\n\s\4\6\i\7\z\g\h\6\w\0\e\7\c\l\m\n\e\r\m\k\2\k\l\i\g\b\w\4\g\f\r\v\4\6\u\e\o\o\u\s\4\t\e\8\o\k\b\h\u\x\b\0\l\j\k\i ]] 00:06:22.668 15:04:51 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:22.668 15:04:51 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:22.927 [2024-11-06 15:04:51.970241] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:22.927 [2024-11-06 15:04:51.970344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58348 ] 00:06:22.927 [2024-11-06 15:04:52.096990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.927 [2024-11-06 15:04:52.146042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.927  [2024-11-06T15:04:52.460Z] Copying: 512/512 [B] (average 500 kBps) 00:06:23.185 00:06:23.185 15:04:52 -- dd/posix.sh@93 -- # [[ g6n5cwj7tsgxrtf36r5jesmmpb4iw46tgany4unvwyva55rq2kgaxtlujnjyfohpincbuq4c5dpystm4tcz3kqa790mdyf3z0svg2jbycmh24htjfnvssplmyidq18p45ejzt9glzwb0h6phfl64h57fcg5vxdv6apes71ycq7nuvd1agucw5lag911tnqtngn8rp45j5wcq4yi47eqc5xmonn54hxh6bnult6lw37g2v91di77ke7bmu8zoql603w6f8j62oixn4jkron9ss5htpwxx6o1agaukew8zlp2xgrgmbi3ykq8bc4j2s7ddl96cgyohwi2r1h3s3oa1s4aru3ud1la1c4uo27phqft6fh1qu83hw4iwikwvutlcx93vdwfu4ydgton8f90ppfjmrx0332cgaozkv5hhvnm7t60u2krx8q09soll2zltrmw5v0ns46i7zgh6w0e7clmnermk2kligbw4gfrv46ueoous4te8okbhuxb0ljki == \g\6\n\5\c\w\j\7\t\s\g\x\r\t\f\3\6\r\5\j\e\s\m\m\p\b\4\i\w\4\6\t\g\a\n\y\4\u\n\v\w\y\v\a\5\5\r\q\2\k\g\a\x\t\l\u\j\n\j\y\f\o\h\p\i\n\c\b\u\q\4\c\5\d\p\y\s\t\m\4\t\c\z\3\k\q\a\7\9\0\m\d\y\f\3\z\0\s\v\g\2\j\b\y\c\m\h\2\4\h\t\j\f\n\v\s\s\p\l\m\y\i\d\q\1\8\p\4\5\e\j\z\t\9\g\l\z\w\b\0\h\6\p\h\f\l\6\4\h\5\7\f\c\g\5\v\x\d\v\6\a\p\e\s\7\1\y\c\q\7\n\u\v\d\1\a\g\u\c\w\5\l\a\g\9\1\1\t\n\q\t\n\g\n\8\r\p\4\5\j\5\w\c\q\4\y\i\4\7\e\q\c\5\x\m\o\n\n\5\4\h\x\h\6\b\n\u\l\t\6\l\w\3\7\g\2\v\9\1\d\i\7\7\k\e\7\b\m\u\8\z\o\q\l\6\0\3\w\6\f\8\j\6\2\o\i\x\n\4\j\k\r\o\n\9\s\s\5\h\t\p\w\x\x\6\o\1\a\g\a\u\k\e\w\8\z\l\p\2\x\g\r\g\m\b\i\3\y\k\q\8\b\c\4\j\2\s\7\d\d\l\9\6\c\g\y\o\h\w\i\2\r\1\h\3\s\3\o\a\1\s\4\a\r\u\3\u\d\1\l\a\1\c\4\u\o\2\7\p\h\q\f\t\6\f\h\1\q\u\8\3\h\w\4\i\w\i\k\w\v\u\t\l\c\x\9\3\v\d\w\f\u\4\y\d\g\t\o\n\8\f\9\0\p\p\f\j\m\r\x\0\3\3\2\c\g\a\o\z\k\v\5\h\h\v\n\m\7\t\6\0\u\2\k\r\x\8\q\0\9\s\o\l\l\2\z\l\t\r\m\w\5\v\0\n\s\4\6\i\7\z\g\h\6\w\0\e\7\c\l\m\n\e\r\m\k\2\k\l\i\g\b\w\4\g\f\r\v\4\6\u\e\o\o\u\s\4\t\e\8\o\k\b\h\u\x\b\0\l\j\k\i ]] 00:06:23.185 15:04:52 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:23.185 15:04:52 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:23.185 [2024-11-06 15:04:52.408226] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:23.185 [2024-11-06 15:04:52.408311] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58355 ] 00:06:23.444 [2024-11-06 15:04:52.530660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.444 [2024-11-06 15:04:52.579968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.444  [2024-11-06T15:04:52.978Z] Copying: 512/512 [B] (average 250 kBps) 00:06:23.703 00:06:23.703 ************************************ 00:06:23.703 END TEST dd_flags_misc 00:06:23.703 ************************************ 00:06:23.703 15:04:52 -- dd/posix.sh@93 -- # [[ g6n5cwj7tsgxrtf36r5jesmmpb4iw46tgany4unvwyva55rq2kgaxtlujnjyfohpincbuq4c5dpystm4tcz3kqa790mdyf3z0svg2jbycmh24htjfnvssplmyidq18p45ejzt9glzwb0h6phfl64h57fcg5vxdv6apes71ycq7nuvd1agucw5lag911tnqtngn8rp45j5wcq4yi47eqc5xmonn54hxh6bnult6lw37g2v91di77ke7bmu8zoql603w6f8j62oixn4jkron9ss5htpwxx6o1agaukew8zlp2xgrgmbi3ykq8bc4j2s7ddl96cgyohwi2r1h3s3oa1s4aru3ud1la1c4uo27phqft6fh1qu83hw4iwikwvutlcx93vdwfu4ydgton8f90ppfjmrx0332cgaozkv5hhvnm7t60u2krx8q09soll2zltrmw5v0ns46i7zgh6w0e7clmnermk2kligbw4gfrv46ueoous4te8okbhuxb0ljki == \g\6\n\5\c\w\j\7\t\s\g\x\r\t\f\3\6\r\5\j\e\s\m\m\p\b\4\i\w\4\6\t\g\a\n\y\4\u\n\v\w\y\v\a\5\5\r\q\2\k\g\a\x\t\l\u\j\n\j\y\f\o\h\p\i\n\c\b\u\q\4\c\5\d\p\y\s\t\m\4\t\c\z\3\k\q\a\7\9\0\m\d\y\f\3\z\0\s\v\g\2\j\b\y\c\m\h\2\4\h\t\j\f\n\v\s\s\p\l\m\y\i\d\q\1\8\p\4\5\e\j\z\t\9\g\l\z\w\b\0\h\6\p\h\f\l\6\4\h\5\7\f\c\g\5\v\x\d\v\6\a\p\e\s\7\1\y\c\q\7\n\u\v\d\1\a\g\u\c\w\5\l\a\g\9\1\1\t\n\q\t\n\g\n\8\r\p\4\5\j\5\w\c\q\4\y\i\4\7\e\q\c\5\x\m\o\n\n\5\4\h\x\h\6\b\n\u\l\t\6\l\w\3\7\g\2\v\9\1\d\i\7\7\k\e\7\b\m\u\8\z\o\q\l\6\0\3\w\6\f\8\j\6\2\o\i\x\n\4\j\k\r\o\n\9\s\s\5\h\t\p\w\x\x\6\o\1\a\g\a\u\k\e\w\8\z\l\p\2\x\g\r\g\m\b\i\3\y\k\q\8\b\c\4\j\2\s\7\d\d\l\9\6\c\g\y\o\h\w\i\2\r\1\h\3\s\3\o\a\1\s\4\a\r\u\3\u\d\1\l\a\1\c\4\u\o\2\7\p\h\q\f\t\6\f\h\1\q\u\8\3\h\w\4\i\w\i\k\w\v\u\t\l\c\x\9\3\v\d\w\f\u\4\y\d\g\t\o\n\8\f\9\0\p\p\f\j\m\r\x\0\3\3\2\c\g\a\o\z\k\v\5\h\h\v\n\m\7\t\6\0\u\2\k\r\x\8\q\0\9\s\o\l\l\2\z\l\t\r\m\w\5\v\0\n\s\4\6\i\7\z\g\h\6\w\0\e\7\c\l\m\n\e\r\m\k\2\k\l\i\g\b\w\4\g\f\r\v\4\6\u\e\o\o\u\s\4\t\e\8\o\k\b\h\u\x\b\0\l\j\k\i ]] 00:06:23.703 00:06:23.703 real 0m3.573s 00:06:23.703 user 0m1.930s 00:06:23.703 sys 0m0.673s 00:06:23.703 15:04:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.703 15:04:52 -- common/autotest_common.sh@10 -- # set +x 00:06:23.703 15:04:52 -- dd/posix.sh@131 -- # tests_forced_aio 00:06:23.703 15:04:52 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:23.703 * Second test run, disabling liburing, forcing AIO 00:06:23.703 15:04:52 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:23.703 15:04:52 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:23.703 15:04:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:23.703 15:04:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.703 15:04:52 -- common/autotest_common.sh@10 -- # set +x 00:06:23.703 ************************************ 00:06:23.703 START TEST dd_flag_append_forced_aio 00:06:23.703 ************************************ 00:06:23.703 15:04:52 -- common/autotest_common.sh@1114 -- # append 00:06:23.703 15:04:52 -- dd/posix.sh@16 -- # local dump0 00:06:23.703 15:04:52 -- dd/posix.sh@17 -- # local dump1 00:06:23.703 15:04:52 -- dd/posix.sh@19 -- # gen_bytes 32 
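Note on the dd_flags_misc block that closes above: it drives the same copy through every combination of read flag (direct, nonblock) and write flag (direct, nonblock, sync, dsync), regenerating 512 bytes of input each time and comparing the output, which is why the trace repeats the same pattern eight times. A condensed sketch of that loop (gen_bytes is the suite's own helper; the redirection shown here is only illustrative):

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    T=/home/vagrant/spdk_repo/spdk/test/dd
    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
        gen_bytes 512 > "$T/dd.dump0"        # fresh input for each read flag
        for flag_rw in "${flags_rw[@]}"; do
            "$DD" --if="$T/dd.dump0" --iflag="$flag_ro" \
                  --of="$T/dd.dump1" --oflag="$flag_rw"
            [[ $(< "$T/dd.dump0") == "$(< "$T/dd.dump1") ]]   # output must match the input
        done
    done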
00:06:23.703 15:04:52 -- dd/common.sh@98 -- # xtrace_disable 00:06:23.703 15:04:52 -- common/autotest_common.sh@10 -- # set +x 00:06:23.703 15:04:52 -- dd/posix.sh@19 -- # dump0=tjf57p39t2wwqjnw4st7lte2knz88yh8 00:06:23.703 15:04:52 -- dd/posix.sh@20 -- # gen_bytes 32 00:06:23.703 15:04:52 -- dd/common.sh@98 -- # xtrace_disable 00:06:23.703 15:04:52 -- common/autotest_common.sh@10 -- # set +x 00:06:23.703 15:04:52 -- dd/posix.sh@20 -- # dump1=wqltlwjkez43g4pw8z5cz0bf3odoiph1 00:06:23.703 15:04:52 -- dd/posix.sh@22 -- # printf %s tjf57p39t2wwqjnw4st7lte2knz88yh8 00:06:23.703 15:04:52 -- dd/posix.sh@23 -- # printf %s wqltlwjkez43g4pw8z5cz0bf3odoiph1 00:06:23.703 15:04:52 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:23.703 [2024-11-06 15:04:52.919556] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:23.703 [2024-11-06 15:04:52.919648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58382 ] 00:06:23.962 [2024-11-06 15:04:53.053330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.962 [2024-11-06 15:04:53.102348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.962  [2024-11-06T15:04:53.496Z] Copying: 32/32 [B] (average 31 kBps) 00:06:24.221 00:06:24.221 15:04:53 -- dd/posix.sh@27 -- # [[ wqltlwjkez43g4pw8z5cz0bf3odoiph1tjf57p39t2wwqjnw4st7lte2knz88yh8 == \w\q\l\t\l\w\j\k\e\z\4\3\g\4\p\w\8\z\5\c\z\0\b\f\3\o\d\o\i\p\h\1\t\j\f\5\7\p\3\9\t\2\w\w\q\j\n\w\4\s\t\7\l\t\e\2\k\n\z\8\8\y\h\8 ]] 00:06:24.221 00:06:24.221 real 0m0.469s 00:06:24.221 user 0m0.249s 00:06:24.221 sys 0m0.099s 00:06:24.221 15:04:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.221 ************************************ 00:06:24.221 END TEST dd_flag_append_forced_aio 00:06:24.221 ************************************ 00:06:24.221 15:04:53 -- common/autotest_common.sh@10 -- # set +x 00:06:24.221 15:04:53 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:24.221 15:04:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.221 15:04:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.221 15:04:53 -- common/autotest_common.sh@10 -- # set +x 00:06:24.221 ************************************ 00:06:24.221 START TEST dd_flag_directory_forced_aio 00:06:24.221 ************************************ 00:06:24.221 15:04:53 -- common/autotest_common.sh@1114 -- # directory 00:06:24.221 15:04:53 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:24.221 15:04:53 -- common/autotest_common.sh@650 -- # local es=0 00:06:24.221 15:04:53 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:24.221 15:04:53 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.221 15:04:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.221 15:04:53 -- common/autotest_common.sh@642 -- # 
type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.221 15:04:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.221 15:04:53 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.221 15:04:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.221 15:04:53 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.221 15:04:53 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:24.221 15:04:53 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:24.221 [2024-11-06 15:04:53.433510] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:24.221 [2024-11-06 15:04:53.433790] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58408 ] 00:06:24.490 [2024-11-06 15:04:53.565611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.490 [2024-11-06 15:04:53.614629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.490 [2024-11-06 15:04:53.659936] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:24.490 [2024-11-06 15:04:53.660027] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:24.490 [2024-11-06 15:04:53.660056] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:24.490 [2024-11-06 15:04:53.726010] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:24.795 15:04:53 -- common/autotest_common.sh@653 -- # es=236 00:06:24.795 15:04:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:24.795 15:04:53 -- common/autotest_common.sh@662 -- # es=108 00:06:24.795 15:04:53 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:24.795 15:04:53 -- common/autotest_common.sh@670 -- # es=1 00:06:24.795 15:04:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:24.795 15:04:53 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:24.795 15:04:53 -- common/autotest_common.sh@650 -- # local es=0 00:06:24.795 15:04:53 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:24.795 15:04:53 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.795 15:04:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.795 15:04:53 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.795 15:04:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.795 15:04:53 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.795 15:04:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.795 15:04:53 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.795 15:04:53 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:24.795 15:04:53 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:24.795 [2024-11-06 15:04:53.883944] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:24.795 [2024-11-06 15:04:53.884193] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58418 ] 00:06:24.795 [2024-11-06 15:04:54.019524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.070 [2024-11-06 15:04:54.073891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.070 [2024-11-06 15:04:54.121060] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:25.070 [2024-11-06 15:04:54.121113] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:25.070 [2024-11-06 15:04:54.121143] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:25.070 [2024-11-06 15:04:54.180744] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:25.070 15:04:54 -- common/autotest_common.sh@653 -- # es=236 00:06:25.070 15:04:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:25.070 15:04:54 -- common/autotest_common.sh@662 -- # es=108 00:06:25.070 15:04:54 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:25.070 15:04:54 -- common/autotest_common.sh@670 -- # es=1 00:06:25.070 15:04:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:25.070 00:06:25.070 real 0m0.895s 00:06:25.070 user 0m0.495s 00:06:25.070 sys 0m0.193s 00:06:25.070 ************************************ 00:06:25.070 END TEST dd_flag_directory_forced_aio 00:06:25.070 ************************************ 00:06:25.070 15:04:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.070 15:04:54 -- common/autotest_common.sh@10 -- # set +x 00:06:25.070 15:04:54 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:25.070 15:04:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.070 15:04:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.070 15:04:54 -- common/autotest_common.sh@10 -- # set +x 00:06:25.070 ************************************ 00:06:25.070 START TEST dd_flag_nofollow_forced_aio 00:06:25.070 ************************************ 00:06:25.070 15:04:54 -- common/autotest_common.sh@1114 -- # nofollow 00:06:25.070 15:04:54 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:25.070 15:04:54 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:25.070 15:04:54 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:25.070 15:04:54 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:25.070 15:04:54 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.070 15:04:54 -- common/autotest_common.sh@650 -- # local es=0 00:06:25.070 15:04:54 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.070 15:04:54 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.343 15:04:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.343 15:04:54 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.343 15:04:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.343 15:04:54 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.343 15:04:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.343 15:04:54 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.343 15:04:54 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:25.343 15:04:54 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.343 [2024-11-06 15:04:54.395256] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:25.343 [2024-11-06 15:04:54.395358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58445 ] 00:06:25.343 [2024-11-06 15:04:54.532801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.343 [2024-11-06 15:04:54.582870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.602 [2024-11-06 15:04:54.628774] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:25.602 [2024-11-06 15:04:54.628824] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:25.602 [2024-11-06 15:04:54.628854] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:25.602 [2024-11-06 15:04:54.692055] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:25.602 15:04:54 -- common/autotest_common.sh@653 -- # es=216 00:06:25.602 15:04:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:25.602 15:04:54 -- common/autotest_common.sh@662 -- # es=88 00:06:25.602 15:04:54 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:25.602 15:04:54 -- common/autotest_common.sh@670 -- # es=1 00:06:25.602 15:04:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:25.602 15:04:54 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:25.602 15:04:54 -- common/autotest_common.sh@650 -- # local es=0 00:06:25.602 15:04:54 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:25.602 15:04:54 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.602 15:04:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.602 15:04:54 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.602 15:04:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.602 15:04:54 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.602 15:04:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.602 15:04:54 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.602 15:04:54 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:25.602 15:04:54 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:25.602 [2024-11-06 15:04:54.851069] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:25.602 [2024-11-06 15:04:54.851161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58456 ] 00:06:25.861 [2024-11-06 15:04:54.987652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.861 [2024-11-06 15:04:55.037020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.861 [2024-11-06 15:04:55.081465] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:25.861 [2024-11-06 15:04:55.081518] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:25.861 [2024-11-06 15:04:55.081548] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.120 [2024-11-06 15:04:55.143266] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:26.120 15:04:55 -- common/autotest_common.sh@653 -- # es=216 00:06:26.120 15:04:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:26.120 15:04:55 -- common/autotest_common.sh@662 -- # es=88 00:06:26.120 15:04:55 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:26.120 15:04:55 -- common/autotest_common.sh@670 -- # es=1 00:06:26.120 15:04:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:26.120 15:04:55 -- dd/posix.sh@46 -- # gen_bytes 512 00:06:26.120 15:04:55 -- dd/common.sh@98 -- # xtrace_disable 00:06:26.120 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:06:26.120 15:04:55 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.120 [2024-11-06 15:04:55.286969] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
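Everything from the "* Second test run, disabling liburing, forcing AIO" marker onward repeats the same posix cases with --aio added to every spdk_dd invocation (the trace shows DD_APP+=("--aio") just before the _forced_aio tests start), so the nofollow failure above is exercised through the kernel AIO path instead of io_uring. A sketch of how that extra argument rides along; the initial contents of DD_APP are not visible in this excerpt, so the first line is an assumption:

    DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)   # assumed starting value
    DD_APP+=("--aio")                                         # from the trace: force AIO, skip liburing
    T=/home/vagrant/spdk_repo/spdk/test/dd
    ! "${DD_APP[@]}" --if="$T/dd.dump0" --of="$T/dd.dump1.link" --oflag=nofollow   # still expected to fail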
00:06:26.120 [2024-11-06 15:04:55.287051] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58458 ] 00:06:26.378 [2024-11-06 15:04:55.413127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.378 [2024-11-06 15:04:55.469000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.378  [2024-11-06T15:04:55.913Z] Copying: 512/512 [B] (average 500 kBps) 00:06:26.638 00:06:26.638 15:04:55 -- dd/posix.sh@49 -- # [[ da1flbvkcntwez5jvhboyds3bwbvqwnpl6a8fquvr3pvdsk8rvi1o3ai3awtysxnez02vfb0eqp4lgppxw6a85thgacc7pk6ckmyuzqplvggpmnxi1nb1gkbgawytm1xmdyvd9i7rw9er134k1r0tpdl0jfqn7rq9ly2iq9fqulyt8d73v926r5x0fy9sevo7pe6ajdx00gjv8ns5e9rjkkhv9n2ied5p7kj8cbwdozjw1qq9cq96ubxrg5aos6anzmt5k1qn14ci21114sb3uwyq96vsrud4yxwnaqxmmv6ibbi8dxyi8bdufx4cky2jms0mwcrolphny0ztnglsa3nj0rafmkw8wzb0v7gf5c8wwwh16f7uafmhox4dz191lglpce0fntdnlsjjse7zx22r7xnjh8zhgymo731vc3i0f5c4gtb6jd3gybmh8vho9ju4poz87ya1u7t6p98wsnlf6bvzqt4a4f1n8j1dvmknhs6zfvdic7yq6nyhcb0 == \d\a\1\f\l\b\v\k\c\n\t\w\e\z\5\j\v\h\b\o\y\d\s\3\b\w\b\v\q\w\n\p\l\6\a\8\f\q\u\v\r\3\p\v\d\s\k\8\r\v\i\1\o\3\a\i\3\a\w\t\y\s\x\n\e\z\0\2\v\f\b\0\e\q\p\4\l\g\p\p\x\w\6\a\8\5\t\h\g\a\c\c\7\p\k\6\c\k\m\y\u\z\q\p\l\v\g\g\p\m\n\x\i\1\n\b\1\g\k\b\g\a\w\y\t\m\1\x\m\d\y\v\d\9\i\7\r\w\9\e\r\1\3\4\k\1\r\0\t\p\d\l\0\j\f\q\n\7\r\q\9\l\y\2\i\q\9\f\q\u\l\y\t\8\d\7\3\v\9\2\6\r\5\x\0\f\y\9\s\e\v\o\7\p\e\6\a\j\d\x\0\0\g\j\v\8\n\s\5\e\9\r\j\k\k\h\v\9\n\2\i\e\d\5\p\7\k\j\8\c\b\w\d\o\z\j\w\1\q\q\9\c\q\9\6\u\b\x\r\g\5\a\o\s\6\a\n\z\m\t\5\k\1\q\n\1\4\c\i\2\1\1\1\4\s\b\3\u\w\y\q\9\6\v\s\r\u\d\4\y\x\w\n\a\q\x\m\m\v\6\i\b\b\i\8\d\x\y\i\8\b\d\u\f\x\4\c\k\y\2\j\m\s\0\m\w\c\r\o\l\p\h\n\y\0\z\t\n\g\l\s\a\3\n\j\0\r\a\f\m\k\w\8\w\z\b\0\v\7\g\f\5\c\8\w\w\w\h\1\6\f\7\u\a\f\m\h\o\x\4\d\z\1\9\1\l\g\l\p\c\e\0\f\n\t\d\n\l\s\j\j\s\e\7\z\x\2\2\r\7\x\n\j\h\8\z\h\g\y\m\o\7\3\1\v\c\3\i\0\f\5\c\4\g\t\b\6\j\d\3\g\y\b\m\h\8\v\h\o\9\j\u\4\p\o\z\8\7\y\a\1\u\7\t\6\p\9\8\w\s\n\l\f\6\b\v\z\q\t\4\a\4\f\1\n\8\j\1\d\v\m\k\n\h\s\6\z\f\v\d\i\c\7\y\q\6\n\y\h\c\b\0 ]] 00:06:26.638 00:06:26.638 real 0m1.363s 00:06:26.638 user 0m0.778s 00:06:26.638 sys 0m0.255s 00:06:26.638 15:04:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.638 ************************************ 00:06:26.638 END TEST dd_flag_nofollow_forced_aio 00:06:26.638 ************************************ 00:06:26.638 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:06:26.638 15:04:55 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:06:26.638 15:04:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.638 15:04:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.638 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:06:26.638 ************************************ 00:06:26.638 START TEST dd_flag_noatime_forced_aio 00:06:26.638 ************************************ 00:06:26.638 15:04:55 -- common/autotest_common.sh@1114 -- # noatime 00:06:26.638 15:04:55 -- dd/posix.sh@53 -- # local atime_if 00:06:26.638 15:04:55 -- dd/posix.sh@54 -- # local atime_of 00:06:26.638 15:04:55 -- dd/posix.sh@58 -- # gen_bytes 512 00:06:26.638 15:04:55 -- dd/common.sh@98 -- # xtrace_disable 00:06:26.638 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:06:26.638 15:04:55 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:26.638 15:04:55 -- dd/posix.sh@60 -- 
# atime_if=1730905495 00:06:26.638 15:04:55 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.638 15:04:55 -- dd/posix.sh@61 -- # atime_of=1730905495 00:06:26.638 15:04:55 -- dd/posix.sh@66 -- # sleep 1 00:06:27.574 15:04:56 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.574 [2024-11-06 15:04:56.810390] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:27.574 [2024-11-06 15:04:56.810478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58504 ] 00:06:27.833 [2024-11-06 15:04:56.933089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.833 [2024-11-06 15:04:56.982816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.833  [2024-11-06T15:04:57.366Z] Copying: 512/512 [B] (average 500 kBps) 00:06:28.091 00:06:28.091 15:04:57 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:28.091 15:04:57 -- dd/posix.sh@69 -- # (( atime_if == 1730905495 )) 00:06:28.091 15:04:57 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:28.091 15:04:57 -- dd/posix.sh@70 -- # (( atime_of == 1730905495 )) 00:06:28.091 15:04:57 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:28.091 [2024-11-06 15:04:57.263987] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:28.091 [2024-11-06 15:04:57.264078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58510 ] 00:06:28.351 [2024-11-06 15:04:57.399369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.351 [2024-11-06 15:04:57.449159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.351  [2024-11-06T15:04:57.885Z] Copying: 512/512 [B] (average 500 kBps) 00:06:28.610 00:06:28.610 15:04:57 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:28.610 15:04:57 -- dd/posix.sh@73 -- # (( atime_if < 1730905497 )) 00:06:28.610 00:06:28.610 real 0m1.935s 00:06:28.610 user 0m0.497s 00:06:28.610 sys 0m0.197s 00:06:28.610 15:04:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.610 ************************************ 00:06:28.610 END TEST dd_flag_noatime_forced_aio 00:06:28.610 ************************************ 00:06:28.610 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:06:28.610 15:04:57 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:28.610 15:04:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.610 15:04:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.610 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:06:28.610 ************************************ 00:06:28.610 START TEST dd_flags_misc_forced_aio 00:06:28.610 ************************************ 00:06:28.610 15:04:57 -- common/autotest_common.sh@1114 -- # io 00:06:28.610 15:04:57 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:28.610 15:04:57 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:28.610 15:04:57 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:28.610 15:04:57 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:28.610 15:04:57 -- dd/posix.sh@86 -- # gen_bytes 512 00:06:28.610 15:04:57 -- dd/common.sh@98 -- # xtrace_disable 00:06:28.610 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:06:28.610 15:04:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:28.610 15:04:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:28.610 [2024-11-06 15:04:57.793676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:28.610 [2024-11-06 15:04:57.793933] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58542 ] 00:06:28.869 [2024-11-06 15:04:57.931900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.869 [2024-11-06 15:04:57.982419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.869  [2024-11-06T15:04:58.403Z] Copying: 512/512 [B] (average 500 kBps) 00:06:29.128 00:06:29.128 15:04:58 -- dd/posix.sh@93 -- # [[ osty09iiuhx4vg9yazrf7mkxpxhnvdq2jvf954ust1itqasnk4fpcnk3lxyn3u7gth9v09r9twkm6qdlfi37fifqo87zseil7rdqhjbfkay8y4052w0zuqpepqh3emudz4szna8evlk7120i6ngztjrzyv22fkjyuxfg569poz5q638pcj1pgy20lo6e5wyltp4lbva1zc8qbc3l8f7uz7c4iprca9yagiwhqxqx5vp7l0m0z9n8xo2rd84de5qaokba6clp8462xn6qyoce86pb29gzpq7q6pfndx1fqtxt3afodrjq8f7vhddc3k9obqjiru43vae7t8wqzbn13wp9dxwhfaac04vygvr1g1yxhldmwnza73x1fy2uaq5g725dsafha6iexd5boo9m2g4wm0jfnyr78dxx8jnifmi0j6s01xruc1nmdnrn4c8ukn7zu6h705mfzpu3hwuuizea87m9tu3r4tzid5xg07tyccfo84ir3off8k7g1jdb == \o\s\t\y\0\9\i\i\u\h\x\4\v\g\9\y\a\z\r\f\7\m\k\x\p\x\h\n\v\d\q\2\j\v\f\9\5\4\u\s\t\1\i\t\q\a\s\n\k\4\f\p\c\n\k\3\l\x\y\n\3\u\7\g\t\h\9\v\0\9\r\9\t\w\k\m\6\q\d\l\f\i\3\7\f\i\f\q\o\8\7\z\s\e\i\l\7\r\d\q\h\j\b\f\k\a\y\8\y\4\0\5\2\w\0\z\u\q\p\e\p\q\h\3\e\m\u\d\z\4\s\z\n\a\8\e\v\l\k\7\1\2\0\i\6\n\g\z\t\j\r\z\y\v\2\2\f\k\j\y\u\x\f\g\5\6\9\p\o\z\5\q\6\3\8\p\c\j\1\p\g\y\2\0\l\o\6\e\5\w\y\l\t\p\4\l\b\v\a\1\z\c\8\q\b\c\3\l\8\f\7\u\z\7\c\4\i\p\r\c\a\9\y\a\g\i\w\h\q\x\q\x\5\v\p\7\l\0\m\0\z\9\n\8\x\o\2\r\d\8\4\d\e\5\q\a\o\k\b\a\6\c\l\p\8\4\6\2\x\n\6\q\y\o\c\e\8\6\p\b\2\9\g\z\p\q\7\q\6\p\f\n\d\x\1\f\q\t\x\t\3\a\f\o\d\r\j\q\8\f\7\v\h\d\d\c\3\k\9\o\b\q\j\i\r\u\4\3\v\a\e\7\t\8\w\q\z\b\n\1\3\w\p\9\d\x\w\h\f\a\a\c\0\4\v\y\g\v\r\1\g\1\y\x\h\l\d\m\w\n\z\a\7\3\x\1\f\y\2\u\a\q\5\g\7\2\5\d\s\a\f\h\a\6\i\e\x\d\5\b\o\o\9\m\2\g\4\w\m\0\j\f\n\y\r\7\8\d\x\x\8\j\n\i\f\m\i\0\j\6\s\0\1\x\r\u\c\1\n\m\d\n\r\n\4\c\8\u\k\n\7\z\u\6\h\7\0\5\m\f\z\p\u\3\h\w\u\u\i\z\e\a\8\7\m\9\t\u\3\r\4\t\z\i\d\5\x\g\0\7\t\y\c\c\f\o\8\4\i\r\3\o\f\f\8\k\7\g\1\j\d\b ]] 00:06:29.128 15:04:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:29.128 15:04:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:29.128 [2024-11-06 15:04:58.247460] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:29.128 [2024-11-06 15:04:58.247601] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58544 ] 00:06:29.128 [2024-11-06 15:04:58.380506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.386 [2024-11-06 15:04:58.430970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.386  [2024-11-06T15:04:58.661Z] Copying: 512/512 [B] (average 500 kBps) 00:06:29.386 00:06:29.645 15:04:58 -- dd/posix.sh@93 -- # [[ osty09iiuhx4vg9yazrf7mkxpxhnvdq2jvf954ust1itqasnk4fpcnk3lxyn3u7gth9v09r9twkm6qdlfi37fifqo87zseil7rdqhjbfkay8y4052w0zuqpepqh3emudz4szna8evlk7120i6ngztjrzyv22fkjyuxfg569poz5q638pcj1pgy20lo6e5wyltp4lbva1zc8qbc3l8f7uz7c4iprca9yagiwhqxqx5vp7l0m0z9n8xo2rd84de5qaokba6clp8462xn6qyoce86pb29gzpq7q6pfndx1fqtxt3afodrjq8f7vhddc3k9obqjiru43vae7t8wqzbn13wp9dxwhfaac04vygvr1g1yxhldmwnza73x1fy2uaq5g725dsafha6iexd5boo9m2g4wm0jfnyr78dxx8jnifmi0j6s01xruc1nmdnrn4c8ukn7zu6h705mfzpu3hwuuizea87m9tu3r4tzid5xg07tyccfo84ir3off8k7g1jdb == \o\s\t\y\0\9\i\i\u\h\x\4\v\g\9\y\a\z\r\f\7\m\k\x\p\x\h\n\v\d\q\2\j\v\f\9\5\4\u\s\t\1\i\t\q\a\s\n\k\4\f\p\c\n\k\3\l\x\y\n\3\u\7\g\t\h\9\v\0\9\r\9\t\w\k\m\6\q\d\l\f\i\3\7\f\i\f\q\o\8\7\z\s\e\i\l\7\r\d\q\h\j\b\f\k\a\y\8\y\4\0\5\2\w\0\z\u\q\p\e\p\q\h\3\e\m\u\d\z\4\s\z\n\a\8\e\v\l\k\7\1\2\0\i\6\n\g\z\t\j\r\z\y\v\2\2\f\k\j\y\u\x\f\g\5\6\9\p\o\z\5\q\6\3\8\p\c\j\1\p\g\y\2\0\l\o\6\e\5\w\y\l\t\p\4\l\b\v\a\1\z\c\8\q\b\c\3\l\8\f\7\u\z\7\c\4\i\p\r\c\a\9\y\a\g\i\w\h\q\x\q\x\5\v\p\7\l\0\m\0\z\9\n\8\x\o\2\r\d\8\4\d\e\5\q\a\o\k\b\a\6\c\l\p\8\4\6\2\x\n\6\q\y\o\c\e\8\6\p\b\2\9\g\z\p\q\7\q\6\p\f\n\d\x\1\f\q\t\x\t\3\a\f\o\d\r\j\q\8\f\7\v\h\d\d\c\3\k\9\o\b\q\j\i\r\u\4\3\v\a\e\7\t\8\w\q\z\b\n\1\3\w\p\9\d\x\w\h\f\a\a\c\0\4\v\y\g\v\r\1\g\1\y\x\h\l\d\m\w\n\z\a\7\3\x\1\f\y\2\u\a\q\5\g\7\2\5\d\s\a\f\h\a\6\i\e\x\d\5\b\o\o\9\m\2\g\4\w\m\0\j\f\n\y\r\7\8\d\x\x\8\j\n\i\f\m\i\0\j\6\s\0\1\x\r\u\c\1\n\m\d\n\r\n\4\c\8\u\k\n\7\z\u\6\h\7\0\5\m\f\z\p\u\3\h\w\u\u\i\z\e\a\8\7\m\9\t\u\3\r\4\t\z\i\d\5\x\g\0\7\t\y\c\c\f\o\8\4\i\r\3\o\f\f\8\k\7\g\1\j\d\b ]] 00:06:29.645 15:04:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:29.645 15:04:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:29.645 [2024-11-06 15:04:58.716336] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:29.646 [2024-11-06 15:04:58.716431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58551 ] 00:06:29.646 [2024-11-06 15:04:58.852569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.646 [2024-11-06 15:04:58.901945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.904  [2024-11-06T15:04:59.179Z] Copying: 512/512 [B] (average 125 kBps) 00:06:29.904 00:06:29.904 15:04:59 -- dd/posix.sh@93 -- # [[ osty09iiuhx4vg9yazrf7mkxpxhnvdq2jvf954ust1itqasnk4fpcnk3lxyn3u7gth9v09r9twkm6qdlfi37fifqo87zseil7rdqhjbfkay8y4052w0zuqpepqh3emudz4szna8evlk7120i6ngztjrzyv22fkjyuxfg569poz5q638pcj1pgy20lo6e5wyltp4lbva1zc8qbc3l8f7uz7c4iprca9yagiwhqxqx5vp7l0m0z9n8xo2rd84de5qaokba6clp8462xn6qyoce86pb29gzpq7q6pfndx1fqtxt3afodrjq8f7vhddc3k9obqjiru43vae7t8wqzbn13wp9dxwhfaac04vygvr1g1yxhldmwnza73x1fy2uaq5g725dsafha6iexd5boo9m2g4wm0jfnyr78dxx8jnifmi0j6s01xruc1nmdnrn4c8ukn7zu6h705mfzpu3hwuuizea87m9tu3r4tzid5xg07tyccfo84ir3off8k7g1jdb == \o\s\t\y\0\9\i\i\u\h\x\4\v\g\9\y\a\z\r\f\7\m\k\x\p\x\h\n\v\d\q\2\j\v\f\9\5\4\u\s\t\1\i\t\q\a\s\n\k\4\f\p\c\n\k\3\l\x\y\n\3\u\7\g\t\h\9\v\0\9\r\9\t\w\k\m\6\q\d\l\f\i\3\7\f\i\f\q\o\8\7\z\s\e\i\l\7\r\d\q\h\j\b\f\k\a\y\8\y\4\0\5\2\w\0\z\u\q\p\e\p\q\h\3\e\m\u\d\z\4\s\z\n\a\8\e\v\l\k\7\1\2\0\i\6\n\g\z\t\j\r\z\y\v\2\2\f\k\j\y\u\x\f\g\5\6\9\p\o\z\5\q\6\3\8\p\c\j\1\p\g\y\2\0\l\o\6\e\5\w\y\l\t\p\4\l\b\v\a\1\z\c\8\q\b\c\3\l\8\f\7\u\z\7\c\4\i\p\r\c\a\9\y\a\g\i\w\h\q\x\q\x\5\v\p\7\l\0\m\0\z\9\n\8\x\o\2\r\d\8\4\d\e\5\q\a\o\k\b\a\6\c\l\p\8\4\6\2\x\n\6\q\y\o\c\e\8\6\p\b\2\9\g\z\p\q\7\q\6\p\f\n\d\x\1\f\q\t\x\t\3\a\f\o\d\r\j\q\8\f\7\v\h\d\d\c\3\k\9\o\b\q\j\i\r\u\4\3\v\a\e\7\t\8\w\q\z\b\n\1\3\w\p\9\d\x\w\h\f\a\a\c\0\4\v\y\g\v\r\1\g\1\y\x\h\l\d\m\w\n\z\a\7\3\x\1\f\y\2\u\a\q\5\g\7\2\5\d\s\a\f\h\a\6\i\e\x\d\5\b\o\o\9\m\2\g\4\w\m\0\j\f\n\y\r\7\8\d\x\x\8\j\n\i\f\m\i\0\j\6\s\0\1\x\r\u\c\1\n\m\d\n\r\n\4\c\8\u\k\n\7\z\u\6\h\7\0\5\m\f\z\p\u\3\h\w\u\u\i\z\e\a\8\7\m\9\t\u\3\r\4\t\z\i\d\5\x\g\0\7\t\y\c\c\f\o\8\4\i\r\3\o\f\f\8\k\7\g\1\j\d\b ]] 00:06:29.904 15:04:59 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:29.904 15:04:59 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:29.904 [2024-11-06 15:04:59.172159] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:29.904 [2024-11-06 15:04:59.172252] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58559 ] 00:06:30.163 [2024-11-06 15:04:59.307580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.163 [2024-11-06 15:04:59.356258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.163  [2024-11-06T15:04:59.697Z] Copying: 512/512 [B] (average 500 kBps) 00:06:30.422 00:06:30.422 15:04:59 -- dd/posix.sh@93 -- # [[ osty09iiuhx4vg9yazrf7mkxpxhnvdq2jvf954ust1itqasnk4fpcnk3lxyn3u7gth9v09r9twkm6qdlfi37fifqo87zseil7rdqhjbfkay8y4052w0zuqpepqh3emudz4szna8evlk7120i6ngztjrzyv22fkjyuxfg569poz5q638pcj1pgy20lo6e5wyltp4lbva1zc8qbc3l8f7uz7c4iprca9yagiwhqxqx5vp7l0m0z9n8xo2rd84de5qaokba6clp8462xn6qyoce86pb29gzpq7q6pfndx1fqtxt3afodrjq8f7vhddc3k9obqjiru43vae7t8wqzbn13wp9dxwhfaac04vygvr1g1yxhldmwnza73x1fy2uaq5g725dsafha6iexd5boo9m2g4wm0jfnyr78dxx8jnifmi0j6s01xruc1nmdnrn4c8ukn7zu6h705mfzpu3hwuuizea87m9tu3r4tzid5xg07tyccfo84ir3off8k7g1jdb == \o\s\t\y\0\9\i\i\u\h\x\4\v\g\9\y\a\z\r\f\7\m\k\x\p\x\h\n\v\d\q\2\j\v\f\9\5\4\u\s\t\1\i\t\q\a\s\n\k\4\f\p\c\n\k\3\l\x\y\n\3\u\7\g\t\h\9\v\0\9\r\9\t\w\k\m\6\q\d\l\f\i\3\7\f\i\f\q\o\8\7\z\s\e\i\l\7\r\d\q\h\j\b\f\k\a\y\8\y\4\0\5\2\w\0\z\u\q\p\e\p\q\h\3\e\m\u\d\z\4\s\z\n\a\8\e\v\l\k\7\1\2\0\i\6\n\g\z\t\j\r\z\y\v\2\2\f\k\j\y\u\x\f\g\5\6\9\p\o\z\5\q\6\3\8\p\c\j\1\p\g\y\2\0\l\o\6\e\5\w\y\l\t\p\4\l\b\v\a\1\z\c\8\q\b\c\3\l\8\f\7\u\z\7\c\4\i\p\r\c\a\9\y\a\g\i\w\h\q\x\q\x\5\v\p\7\l\0\m\0\z\9\n\8\x\o\2\r\d\8\4\d\e\5\q\a\o\k\b\a\6\c\l\p\8\4\6\2\x\n\6\q\y\o\c\e\8\6\p\b\2\9\g\z\p\q\7\q\6\p\f\n\d\x\1\f\q\t\x\t\3\a\f\o\d\r\j\q\8\f\7\v\h\d\d\c\3\k\9\o\b\q\j\i\r\u\4\3\v\a\e\7\t\8\w\q\z\b\n\1\3\w\p\9\d\x\w\h\f\a\a\c\0\4\v\y\g\v\r\1\g\1\y\x\h\l\d\m\w\n\z\a\7\3\x\1\f\y\2\u\a\q\5\g\7\2\5\d\s\a\f\h\a\6\i\e\x\d\5\b\o\o\9\m\2\g\4\w\m\0\j\f\n\y\r\7\8\d\x\x\8\j\n\i\f\m\i\0\j\6\s\0\1\x\r\u\c\1\n\m\d\n\r\n\4\c\8\u\k\n\7\z\u\6\h\7\0\5\m\f\z\p\u\3\h\w\u\u\i\z\e\a\8\7\m\9\t\u\3\r\4\t\z\i\d\5\x\g\0\7\t\y\c\c\f\o\8\4\i\r\3\o\f\f\8\k\7\g\1\j\d\b ]] 00:06:30.422 15:04:59 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:30.422 15:04:59 -- dd/posix.sh@86 -- # gen_bytes 512 00:06:30.422 15:04:59 -- dd/common.sh@98 -- # xtrace_disable 00:06:30.422 15:04:59 -- common/autotest_common.sh@10 -- # set +x 00:06:30.422 15:04:59 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:30.422 15:04:59 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:30.422 [2024-11-06 15:04:59.639621] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:30.422 [2024-11-06 15:04:59.639885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58561 ] 00:06:30.681 [2024-11-06 15:04:59.777647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.681 [2024-11-06 15:04:59.834226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.681  [2024-11-06T15:05:00.214Z] Copying: 512/512 [B] (average 500 kBps) 00:06:30.939 00:06:30.939 15:05:00 -- dd/posix.sh@93 -- # [[ 4us2qea31sbikm07wpa2rekwvpvk9p1kfermwku06vibexzkycrb4xjewh9xgrpfj8erwzj178d8e7akgmx6f1orab1rtm6lzictog0eqsnr5a06aa15emogl64ucabmcrmvm1lpf9wl2r5ng9tirw0pq0ol5emrtxs46ui8zixoj2qo4btjt12e1nlm4bcpz18wbvbvzzoepx7uaz92vuoh7u40ieozxp0t28c6l9rsni5bi7ldpv2y0e6ollwszzeqd3c7shuruebw5yg3j4apm6yq79qtct5z9nt8v8q0t82mb5uwj9s6i7g7i2ada6s8d50z8gwyy9rcrzyhee5qkw3ihpxb3o755foanbb6ldnhd47gyxl37ike6ts0f8yzikz9zrlkok18wwpygz02wc9o6sub26xhszjnyjg8dxyfj6qwuraefigxwfov61gdrumt2urvei6sagy9rs2ku9bgf78vyht0o5zerlg4krdjovssdzsja7y23uuo == \4\u\s\2\q\e\a\3\1\s\b\i\k\m\0\7\w\p\a\2\r\e\k\w\v\p\v\k\9\p\1\k\f\e\r\m\w\k\u\0\6\v\i\b\e\x\z\k\y\c\r\b\4\x\j\e\w\h\9\x\g\r\p\f\j\8\e\r\w\z\j\1\7\8\d\8\e\7\a\k\g\m\x\6\f\1\o\r\a\b\1\r\t\m\6\l\z\i\c\t\o\g\0\e\q\s\n\r\5\a\0\6\a\a\1\5\e\m\o\g\l\6\4\u\c\a\b\m\c\r\m\v\m\1\l\p\f\9\w\l\2\r\5\n\g\9\t\i\r\w\0\p\q\0\o\l\5\e\m\r\t\x\s\4\6\u\i\8\z\i\x\o\j\2\q\o\4\b\t\j\t\1\2\e\1\n\l\m\4\b\c\p\z\1\8\w\b\v\b\v\z\z\o\e\p\x\7\u\a\z\9\2\v\u\o\h\7\u\4\0\i\e\o\z\x\p\0\t\2\8\c\6\l\9\r\s\n\i\5\b\i\7\l\d\p\v\2\y\0\e\6\o\l\l\w\s\z\z\e\q\d\3\c\7\s\h\u\r\u\e\b\w\5\y\g\3\j\4\a\p\m\6\y\q\7\9\q\t\c\t\5\z\9\n\t\8\v\8\q\0\t\8\2\m\b\5\u\w\j\9\s\6\i\7\g\7\i\2\a\d\a\6\s\8\d\5\0\z\8\g\w\y\y\9\r\c\r\z\y\h\e\e\5\q\k\w\3\i\h\p\x\b\3\o\7\5\5\f\o\a\n\b\b\6\l\d\n\h\d\4\7\g\y\x\l\3\7\i\k\e\6\t\s\0\f\8\y\z\i\k\z\9\z\r\l\k\o\k\1\8\w\w\p\y\g\z\0\2\w\c\9\o\6\s\u\b\2\6\x\h\s\z\j\n\y\j\g\8\d\x\y\f\j\6\q\w\u\r\a\e\f\i\g\x\w\f\o\v\6\1\g\d\r\u\m\t\2\u\r\v\e\i\6\s\a\g\y\9\r\s\2\k\u\9\b\g\f\7\8\v\y\h\t\0\o\5\z\e\r\l\g\4\k\r\d\j\o\v\s\s\d\z\s\j\a\7\y\2\3\u\u\o ]] 00:06:30.939 15:05:00 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:30.939 15:05:00 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:30.939 [2024-11-06 15:05:00.094341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:30.939 [2024-11-06 15:05:00.094428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58574 ] 00:06:31.198 [2024-11-06 15:05:00.217604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.198 [2024-11-06 15:05:00.267077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.198  [2024-11-06T15:05:00.732Z] Copying: 512/512 [B] (average 500 kBps) 00:06:31.457 00:06:31.457 15:05:00 -- dd/posix.sh@93 -- # [[ 4us2qea31sbikm07wpa2rekwvpvk9p1kfermwku06vibexzkycrb4xjewh9xgrpfj8erwzj178d8e7akgmx6f1orab1rtm6lzictog0eqsnr5a06aa15emogl64ucabmcrmvm1lpf9wl2r5ng9tirw0pq0ol5emrtxs46ui8zixoj2qo4btjt12e1nlm4bcpz18wbvbvzzoepx7uaz92vuoh7u40ieozxp0t28c6l9rsni5bi7ldpv2y0e6ollwszzeqd3c7shuruebw5yg3j4apm6yq79qtct5z9nt8v8q0t82mb5uwj9s6i7g7i2ada6s8d50z8gwyy9rcrzyhee5qkw3ihpxb3o755foanbb6ldnhd47gyxl37ike6ts0f8yzikz9zrlkok18wwpygz02wc9o6sub26xhszjnyjg8dxyfj6qwuraefigxwfov61gdrumt2urvei6sagy9rs2ku9bgf78vyht0o5zerlg4krdjovssdzsja7y23uuo == \4\u\s\2\q\e\a\3\1\s\b\i\k\m\0\7\w\p\a\2\r\e\k\w\v\p\v\k\9\p\1\k\f\e\r\m\w\k\u\0\6\v\i\b\e\x\z\k\y\c\r\b\4\x\j\e\w\h\9\x\g\r\p\f\j\8\e\r\w\z\j\1\7\8\d\8\e\7\a\k\g\m\x\6\f\1\o\r\a\b\1\r\t\m\6\l\z\i\c\t\o\g\0\e\q\s\n\r\5\a\0\6\a\a\1\5\e\m\o\g\l\6\4\u\c\a\b\m\c\r\m\v\m\1\l\p\f\9\w\l\2\r\5\n\g\9\t\i\r\w\0\p\q\0\o\l\5\e\m\r\t\x\s\4\6\u\i\8\z\i\x\o\j\2\q\o\4\b\t\j\t\1\2\e\1\n\l\m\4\b\c\p\z\1\8\w\b\v\b\v\z\z\o\e\p\x\7\u\a\z\9\2\v\u\o\h\7\u\4\0\i\e\o\z\x\p\0\t\2\8\c\6\l\9\r\s\n\i\5\b\i\7\l\d\p\v\2\y\0\e\6\o\l\l\w\s\z\z\e\q\d\3\c\7\s\h\u\r\u\e\b\w\5\y\g\3\j\4\a\p\m\6\y\q\7\9\q\t\c\t\5\z\9\n\t\8\v\8\q\0\t\8\2\m\b\5\u\w\j\9\s\6\i\7\g\7\i\2\a\d\a\6\s\8\d\5\0\z\8\g\w\y\y\9\r\c\r\z\y\h\e\e\5\q\k\w\3\i\h\p\x\b\3\o\7\5\5\f\o\a\n\b\b\6\l\d\n\h\d\4\7\g\y\x\l\3\7\i\k\e\6\t\s\0\f\8\y\z\i\k\z\9\z\r\l\k\o\k\1\8\w\w\p\y\g\z\0\2\w\c\9\o\6\s\u\b\2\6\x\h\s\z\j\n\y\j\g\8\d\x\y\f\j\6\q\w\u\r\a\e\f\i\g\x\w\f\o\v\6\1\g\d\r\u\m\t\2\u\r\v\e\i\6\s\a\g\y\9\r\s\2\k\u\9\b\g\f\7\8\v\y\h\t\0\o\5\z\e\r\l\g\4\k\r\d\j\o\v\s\s\d\z\s\j\a\7\y\2\3\u\u\o ]] 00:06:31.457 15:05:00 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:31.457 15:05:00 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:31.457 [2024-11-06 15:05:00.521778] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:31.457 [2024-11-06 15:05:00.521864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58576 ] 00:06:31.457 [2024-11-06 15:05:00.644478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.457 [2024-11-06 15:05:00.703814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.716  [2024-11-06T15:05:00.991Z] Copying: 512/512 [B] (average 250 kBps) 00:06:31.716 00:06:31.716 15:05:00 -- dd/posix.sh@93 -- # [[ 4us2qea31sbikm07wpa2rekwvpvk9p1kfermwku06vibexzkycrb4xjewh9xgrpfj8erwzj178d8e7akgmx6f1orab1rtm6lzictog0eqsnr5a06aa15emogl64ucabmcrmvm1lpf9wl2r5ng9tirw0pq0ol5emrtxs46ui8zixoj2qo4btjt12e1nlm4bcpz18wbvbvzzoepx7uaz92vuoh7u40ieozxp0t28c6l9rsni5bi7ldpv2y0e6ollwszzeqd3c7shuruebw5yg3j4apm6yq79qtct5z9nt8v8q0t82mb5uwj9s6i7g7i2ada6s8d50z8gwyy9rcrzyhee5qkw3ihpxb3o755foanbb6ldnhd47gyxl37ike6ts0f8yzikz9zrlkok18wwpygz02wc9o6sub26xhszjnyjg8dxyfj6qwuraefigxwfov61gdrumt2urvei6sagy9rs2ku9bgf78vyht0o5zerlg4krdjovssdzsja7y23uuo == \4\u\s\2\q\e\a\3\1\s\b\i\k\m\0\7\w\p\a\2\r\e\k\w\v\p\v\k\9\p\1\k\f\e\r\m\w\k\u\0\6\v\i\b\e\x\z\k\y\c\r\b\4\x\j\e\w\h\9\x\g\r\p\f\j\8\e\r\w\z\j\1\7\8\d\8\e\7\a\k\g\m\x\6\f\1\o\r\a\b\1\r\t\m\6\l\z\i\c\t\o\g\0\e\q\s\n\r\5\a\0\6\a\a\1\5\e\m\o\g\l\6\4\u\c\a\b\m\c\r\m\v\m\1\l\p\f\9\w\l\2\r\5\n\g\9\t\i\r\w\0\p\q\0\o\l\5\e\m\r\t\x\s\4\6\u\i\8\z\i\x\o\j\2\q\o\4\b\t\j\t\1\2\e\1\n\l\m\4\b\c\p\z\1\8\w\b\v\b\v\z\z\o\e\p\x\7\u\a\z\9\2\v\u\o\h\7\u\4\0\i\e\o\z\x\p\0\t\2\8\c\6\l\9\r\s\n\i\5\b\i\7\l\d\p\v\2\y\0\e\6\o\l\l\w\s\z\z\e\q\d\3\c\7\s\h\u\r\u\e\b\w\5\y\g\3\j\4\a\p\m\6\y\q\7\9\q\t\c\t\5\z\9\n\t\8\v\8\q\0\t\8\2\m\b\5\u\w\j\9\s\6\i\7\g\7\i\2\a\d\a\6\s\8\d\5\0\z\8\g\w\y\y\9\r\c\r\z\y\h\e\e\5\q\k\w\3\i\h\p\x\b\3\o\7\5\5\f\o\a\n\b\b\6\l\d\n\h\d\4\7\g\y\x\l\3\7\i\k\e\6\t\s\0\f\8\y\z\i\k\z\9\z\r\l\k\o\k\1\8\w\w\p\y\g\z\0\2\w\c\9\o\6\s\u\b\2\6\x\h\s\z\j\n\y\j\g\8\d\x\y\f\j\6\q\w\u\r\a\e\f\i\g\x\w\f\o\v\6\1\g\d\r\u\m\t\2\u\r\v\e\i\6\s\a\g\y\9\r\s\2\k\u\9\b\g\f\7\8\v\y\h\t\0\o\5\z\e\r\l\g\4\k\r\d\j\o\v\s\s\d\z\s\j\a\7\y\2\3\u\u\o ]] 00:06:31.716 15:05:00 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:31.716 15:05:00 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:31.716 [2024-11-06 15:05:00.974600] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:31.716 [2024-11-06 15:05:00.974707] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58589 ] 00:06:31.975 [2024-11-06 15:05:01.096718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.975 [2024-11-06 15:05:01.145805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.975  [2024-11-06T15:05:01.508Z] Copying: 512/512 [B] (average 250 kBps) 00:06:32.233 00:06:32.234 15:05:01 -- dd/posix.sh@93 -- # [[ 4us2qea31sbikm07wpa2rekwvpvk9p1kfermwku06vibexzkycrb4xjewh9xgrpfj8erwzj178d8e7akgmx6f1orab1rtm6lzictog0eqsnr5a06aa15emogl64ucabmcrmvm1lpf9wl2r5ng9tirw0pq0ol5emrtxs46ui8zixoj2qo4btjt12e1nlm4bcpz18wbvbvzzoepx7uaz92vuoh7u40ieozxp0t28c6l9rsni5bi7ldpv2y0e6ollwszzeqd3c7shuruebw5yg3j4apm6yq79qtct5z9nt8v8q0t82mb5uwj9s6i7g7i2ada6s8d50z8gwyy9rcrzyhee5qkw3ihpxb3o755foanbb6ldnhd47gyxl37ike6ts0f8yzikz9zrlkok18wwpygz02wc9o6sub26xhszjnyjg8dxyfj6qwuraefigxwfov61gdrumt2urvei6sagy9rs2ku9bgf78vyht0o5zerlg4krdjovssdzsja7y23uuo == \4\u\s\2\q\e\a\3\1\s\b\i\k\m\0\7\w\p\a\2\r\e\k\w\v\p\v\k\9\p\1\k\f\e\r\m\w\k\u\0\6\v\i\b\e\x\z\k\y\c\r\b\4\x\j\e\w\h\9\x\g\r\p\f\j\8\e\r\w\z\j\1\7\8\d\8\e\7\a\k\g\m\x\6\f\1\o\r\a\b\1\r\t\m\6\l\z\i\c\t\o\g\0\e\q\s\n\r\5\a\0\6\a\a\1\5\e\m\o\g\l\6\4\u\c\a\b\m\c\r\m\v\m\1\l\p\f\9\w\l\2\r\5\n\g\9\t\i\r\w\0\p\q\0\o\l\5\e\m\r\t\x\s\4\6\u\i\8\z\i\x\o\j\2\q\o\4\b\t\j\t\1\2\e\1\n\l\m\4\b\c\p\z\1\8\w\b\v\b\v\z\z\o\e\p\x\7\u\a\z\9\2\v\u\o\h\7\u\4\0\i\e\o\z\x\p\0\t\2\8\c\6\l\9\r\s\n\i\5\b\i\7\l\d\p\v\2\y\0\e\6\o\l\l\w\s\z\z\e\q\d\3\c\7\s\h\u\r\u\e\b\w\5\y\g\3\j\4\a\p\m\6\y\q\7\9\q\t\c\t\5\z\9\n\t\8\v\8\q\0\t\8\2\m\b\5\u\w\j\9\s\6\i\7\g\7\i\2\a\d\a\6\s\8\d\5\0\z\8\g\w\y\y\9\r\c\r\z\y\h\e\e\5\q\k\w\3\i\h\p\x\b\3\o\7\5\5\f\o\a\n\b\b\6\l\d\n\h\d\4\7\g\y\x\l\3\7\i\k\e\6\t\s\0\f\8\y\z\i\k\z\9\z\r\l\k\o\k\1\8\w\w\p\y\g\z\0\2\w\c\9\o\6\s\u\b\2\6\x\h\s\z\j\n\y\j\g\8\d\x\y\f\j\6\q\w\u\r\a\e\f\i\g\x\w\f\o\v\6\1\g\d\r\u\m\t\2\u\r\v\e\i\6\s\a\g\y\9\r\s\2\k\u\9\b\g\f\7\8\v\y\h\t\0\o\5\z\e\r\l\g\4\k\r\d\j\o\v\s\s\d\z\s\j\a\7\y\2\3\u\u\o ]] 00:06:32.234 00:06:32.234 real 0m3.631s 00:06:32.234 user 0m1.935s 00:06:32.234 sys 0m0.715s 00:06:32.234 15:05:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.234 15:05:01 -- common/autotest_common.sh@10 -- # set +x 00:06:32.234 ************************************ 00:06:32.234 END TEST dd_flags_misc_forced_aio 00:06:32.234 ************************************ 00:06:32.234 15:05:01 -- dd/posix.sh@1 -- # cleanup 00:06:32.234 15:05:01 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:32.234 15:05:01 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:32.234 00:06:32.234 real 0m17.374s 00:06:32.234 user 0m8.266s 00:06:32.234 sys 0m3.274s 00:06:32.234 15:05:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.234 ************************************ 00:06:32.234 END TEST spdk_dd_posix 00:06:32.234 15:05:01 -- common/autotest_common.sh@10 -- # set +x 00:06:32.234 ************************************ 00:06:32.234 15:05:01 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:32.234 15:05:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:32.234 15:05:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:06:32.234 15:05:01 -- common/autotest_common.sh@10 -- # set +x 00:06:32.234 ************************************ 00:06:32.234 START TEST spdk_dd_malloc 00:06:32.234 ************************************ 00:06:32.234 15:05:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:32.493 * Looking for test storage... 00:06:32.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:32.493 15:05:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:32.493 15:05:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:32.493 15:05:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:32.493 15:05:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:32.493 15:05:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:32.493 15:05:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:32.493 15:05:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:32.493 15:05:01 -- scripts/common.sh@335 -- # IFS=.-: 00:06:32.493 15:05:01 -- scripts/common.sh@335 -- # read -ra ver1 00:06:32.493 15:05:01 -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.493 15:05:01 -- scripts/common.sh@336 -- # read -ra ver2 00:06:32.493 15:05:01 -- scripts/common.sh@337 -- # local 'op=<' 00:06:32.493 15:05:01 -- scripts/common.sh@339 -- # ver1_l=2 00:06:32.493 15:05:01 -- scripts/common.sh@340 -- # ver2_l=1 00:06:32.493 15:05:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:32.493 15:05:01 -- scripts/common.sh@343 -- # case "$op" in 00:06:32.493 15:05:01 -- scripts/common.sh@344 -- # : 1 00:06:32.493 15:05:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:32.493 15:05:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:32.493 15:05:01 -- scripts/common.sh@364 -- # decimal 1 00:06:32.493 15:05:01 -- scripts/common.sh@352 -- # local d=1 00:06:32.493 15:05:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.493 15:05:01 -- scripts/common.sh@354 -- # echo 1 00:06:32.493 15:05:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:32.493 15:05:01 -- scripts/common.sh@365 -- # decimal 2 00:06:32.493 15:05:01 -- scripts/common.sh@352 -- # local d=2 00:06:32.493 15:05:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.493 15:05:01 -- scripts/common.sh@354 -- # echo 2 00:06:32.493 15:05:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:32.493 15:05:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:32.493 15:05:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:32.493 15:05:01 -- scripts/common.sh@367 -- # return 0 00:06:32.493 15:05:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.493 15:05:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:32.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.493 --rc genhtml_branch_coverage=1 00:06:32.493 --rc genhtml_function_coverage=1 00:06:32.493 --rc genhtml_legend=1 00:06:32.493 --rc geninfo_all_blocks=1 00:06:32.493 --rc geninfo_unexecuted_blocks=1 00:06:32.493 00:06:32.493 ' 00:06:32.493 15:05:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:32.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.493 --rc genhtml_branch_coverage=1 00:06:32.493 --rc genhtml_function_coverage=1 00:06:32.493 --rc genhtml_legend=1 00:06:32.493 --rc geninfo_all_blocks=1 00:06:32.493 --rc geninfo_unexecuted_blocks=1 00:06:32.493 00:06:32.493 ' 00:06:32.493 15:05:01 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:06:32.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.493 --rc genhtml_branch_coverage=1 00:06:32.493 --rc genhtml_function_coverage=1 00:06:32.493 --rc genhtml_legend=1 00:06:32.493 --rc geninfo_all_blocks=1 00:06:32.493 --rc geninfo_unexecuted_blocks=1 00:06:32.493 00:06:32.493 ' 00:06:32.493 15:05:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:32.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.493 --rc genhtml_branch_coverage=1 00:06:32.493 --rc genhtml_function_coverage=1 00:06:32.493 --rc genhtml_legend=1 00:06:32.493 --rc geninfo_all_blocks=1 00:06:32.493 --rc geninfo_unexecuted_blocks=1 00:06:32.493 00:06:32.493 ' 00:06:32.493 15:05:01 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:32.493 15:05:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.493 15:05:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.493 15:05:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.493 15:05:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.494 15:05:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.494 15:05:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.494 15:05:01 -- paths/export.sh@5 -- # export PATH 00:06:32.494 15:05:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.494 15:05:01 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:32.494 15:05:01 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:32.494 15:05:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.494 15:05:01 -- common/autotest_common.sh@10 -- # set +x 00:06:32.494 ************************************ 00:06:32.494 START TEST dd_malloc_copy 00:06:32.494 ************************************ 00:06:32.494 15:05:01 -- common/autotest_common.sh@1114 -- # malloc_copy 00:06:32.494 15:05:01 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:32.494 15:05:01 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:32.494 15:05:01 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:32.494 15:05:01 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:32.494 15:05:01 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:32.494 15:05:01 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:32.494 15:05:01 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:32.494 15:05:01 -- dd/malloc.sh@28 -- # gen_conf 00:06:32.494 15:05:01 -- dd/common.sh@31 -- # xtrace_disable 00:06:32.494 15:05:01 -- common/autotest_common.sh@10 -- # set +x 00:06:32.494 [2024-11-06 15:05:01.691554] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:32.494 [2024-11-06 15:05:01.691634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58659 ] 00:06:32.494 { 00:06:32.494 "subsystems": [ 00:06:32.494 { 00:06:32.494 "subsystem": "bdev", 00:06:32.494 "config": [ 00:06:32.494 { 00:06:32.494 "params": { 00:06:32.494 "block_size": 512, 00:06:32.494 "num_blocks": 1048576, 00:06:32.494 "name": "malloc0" 00:06:32.494 }, 00:06:32.494 "method": "bdev_malloc_create" 00:06:32.494 }, 00:06:32.494 { 00:06:32.494 "params": { 00:06:32.494 "block_size": 512, 00:06:32.494 "num_blocks": 1048576, 00:06:32.494 "name": "malloc1" 00:06:32.494 }, 00:06:32.494 "method": "bdev_malloc_create" 00:06:32.494 }, 00:06:32.494 { 00:06:32.494 "method": "bdev_wait_for_examine" 00:06:32.494 } 00:06:32.494 ] 00:06:32.494 } 00:06:32.494 ] 00:06:32.494 } 00:06:32.753 [2024-11-06 15:05:01.822109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.753 [2024-11-06 15:05:01.878864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.129  [2024-11-06T15:05:04.339Z] Copying: 244/512 [MB] (244 MBps) [2024-11-06T15:05:04.339Z] Copying: 490/512 [MB] (246 MBps) [2024-11-06T15:05:04.598Z] Copying: 512/512 [MB] (average 245 MBps) 00:06:35.323 00:06:35.323 15:05:04 -- dd/malloc.sh@33 -- # gen_conf 00:06:35.323 15:05:04 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:35.323 15:05:04 -- dd/common.sh@31 -- # xtrace_disable 00:06:35.323 15:05:04 -- common/autotest_common.sh@10 -- # set +x 00:06:35.323 [2024-11-06 15:05:04.587009] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:35.323 [2024-11-06 15:05:04.587262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58701 ] 00:06:35.323 { 00:06:35.323 "subsystems": [ 00:06:35.323 { 00:06:35.323 "subsystem": "bdev", 00:06:35.323 "config": [ 00:06:35.323 { 00:06:35.323 "params": { 00:06:35.323 "block_size": 512, 00:06:35.323 "num_blocks": 1048576, 00:06:35.323 "name": "malloc0" 00:06:35.323 }, 00:06:35.323 "method": "bdev_malloc_create" 00:06:35.323 }, 00:06:35.323 { 00:06:35.323 "params": { 00:06:35.323 "block_size": 512, 00:06:35.323 "num_blocks": 1048576, 00:06:35.323 "name": "malloc1" 00:06:35.323 }, 00:06:35.323 "method": "bdev_malloc_create" 00:06:35.323 }, 00:06:35.323 { 00:06:35.323 "method": "bdev_wait_for_examine" 00:06:35.323 } 00:06:35.324 ] 00:06:35.324 } 00:06:35.324 ] 00:06:35.324 } 00:06:35.582 [2024-11-06 15:05:04.723319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.582 [2024-11-06 15:05:04.773728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.959  [2024-11-06T15:05:07.170Z] Copying: 246/512 [MB] (246 MBps) [2024-11-06T15:05:07.170Z] Copying: 492/512 [MB] (245 MBps) [2024-11-06T15:05:07.739Z] Copying: 512/512 [MB] (average 245 MBps) 00:06:38.464 00:06:38.464 ************************************ 00:06:38.464 END TEST dd_malloc_copy 00:06:38.464 ************************************ 00:06:38.464 00:06:38.464 real 0m5.799s 00:06:38.464 user 0m5.152s 00:06:38.464 sys 0m0.501s 00:06:38.464 15:05:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.464 15:05:07 -- common/autotest_common.sh@10 -- # set +x 00:06:38.464 ************************************ 00:06:38.464 END TEST spdk_dd_malloc 00:06:38.464 ************************************ 00:06:38.464 00:06:38.464 real 0m6.033s 00:06:38.464 user 0m5.288s 00:06:38.464 sys 0m0.603s 00:06:38.464 15:05:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.464 15:05:07 -- common/autotest_common.sh@10 -- # set +x 00:06:38.464 15:05:07 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:06:38.464 15:05:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:38.464 15:05:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.464 15:05:07 -- common/autotest_common.sh@10 -- # set +x 00:06:38.464 ************************************ 00:06:38.464 START TEST spdk_dd_bdev_to_bdev 00:06:38.464 ************************************ 00:06:38.464 15:05:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:06:38.464 * Looking for test storage... 
00:06:38.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:38.464 15:05:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:38.464 15:05:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:38.464 15:05:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:38.464 15:05:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:38.464 15:05:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:38.464 15:05:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:38.464 15:05:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:38.464 15:05:07 -- scripts/common.sh@335 -- # IFS=.-: 00:06:38.464 15:05:07 -- scripts/common.sh@335 -- # read -ra ver1 00:06:38.464 15:05:07 -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.464 15:05:07 -- scripts/common.sh@336 -- # read -ra ver2 00:06:38.464 15:05:07 -- scripts/common.sh@337 -- # local 'op=<' 00:06:38.464 15:05:07 -- scripts/common.sh@339 -- # ver1_l=2 00:06:38.464 15:05:07 -- scripts/common.sh@340 -- # ver2_l=1 00:06:38.464 15:05:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:38.464 15:05:07 -- scripts/common.sh@343 -- # case "$op" in 00:06:38.464 15:05:07 -- scripts/common.sh@344 -- # : 1 00:06:38.464 15:05:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:38.464 15:05:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.464 15:05:07 -- scripts/common.sh@364 -- # decimal 1 00:06:38.464 15:05:07 -- scripts/common.sh@352 -- # local d=1 00:06:38.464 15:05:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.464 15:05:07 -- scripts/common.sh@354 -- # echo 1 00:06:38.464 15:05:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:38.464 15:05:07 -- scripts/common.sh@365 -- # decimal 2 00:06:38.464 15:05:07 -- scripts/common.sh@352 -- # local d=2 00:06:38.464 15:05:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.464 15:05:07 -- scripts/common.sh@354 -- # echo 2 00:06:38.464 15:05:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:38.464 15:05:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:38.464 15:05:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:38.464 15:05:07 -- scripts/common.sh@367 -- # return 0 00:06:38.464 15:05:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.464 15:05:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:38.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.464 --rc genhtml_branch_coverage=1 00:06:38.464 --rc genhtml_function_coverage=1 00:06:38.464 --rc genhtml_legend=1 00:06:38.464 --rc geninfo_all_blocks=1 00:06:38.464 --rc geninfo_unexecuted_blocks=1 00:06:38.464 00:06:38.464 ' 00:06:38.464 15:05:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:38.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.464 --rc genhtml_branch_coverage=1 00:06:38.464 --rc genhtml_function_coverage=1 00:06:38.464 --rc genhtml_legend=1 00:06:38.464 --rc geninfo_all_blocks=1 00:06:38.464 --rc geninfo_unexecuted_blocks=1 00:06:38.464 00:06:38.464 ' 00:06:38.464 15:05:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:38.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.464 --rc genhtml_branch_coverage=1 00:06:38.464 --rc genhtml_function_coverage=1 00:06:38.464 --rc genhtml_legend=1 00:06:38.464 --rc geninfo_all_blocks=1 00:06:38.464 --rc geninfo_unexecuted_blocks=1 00:06:38.464 00:06:38.464 ' 00:06:38.464 15:05:07 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:38.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.464 --rc genhtml_branch_coverage=1 00:06:38.464 --rc genhtml_function_coverage=1 00:06:38.464 --rc genhtml_legend=1 00:06:38.464 --rc geninfo_all_blocks=1 00:06:38.464 --rc geninfo_unexecuted_blocks=1 00:06:38.464 00:06:38.464 ' 00:06:38.464 15:05:07 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:38.464 15:05:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.464 15:05:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.464 15:05:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.465 15:05:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.465 15:05:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.465 15:05:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.465 15:05:07 -- paths/export.sh@5 -- # export PATH 00:06:38.465 15:05:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.465 15:05:07 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:38.465 15:05:07 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:38.465 15:05:07 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:38.465 15:05:07 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:38.465 15:05:07 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:38.465 15:05:07 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:38.465 15:05:07 -- dd/bdev_to_bdev.sh@52 -- # 
nvme0_pci=0000:00:06.0 00:06:38.465 15:05:07 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:38.465 15:05:07 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:38.465 15:05:07 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:06:38.465 15:05:07 -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:06:38.465 15:05:07 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:38.465 15:05:07 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:06:38.465 15:05:07 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:38.465 15:05:07 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:38.465 15:05:07 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.465 15:05:07 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:38.465 15:05:07 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:38.465 15:05:07 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:38.465 15:05:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:38.465 15:05:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.465 15:05:07 -- common/autotest_common.sh@10 -- # set +x 00:06:38.465 ************************************ 00:06:38.465 START TEST dd_inflate_file 00:06:38.465 ************************************ 00:06:38.465 15:05:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:38.724 [2024-11-06 15:05:07.784563] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:38.724 [2024-11-06 15:05:07.784861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58807 ] 00:06:38.724 [2024-11-06 15:05:07.920075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.724 [2024-11-06 15:05:07.977100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.983  [2024-11-06T15:05:08.258Z] Copying: 64/64 [MB] (average 2064 MBps) 00:06:38.983 00:06:38.983 00:06:38.983 real 0m0.504s 00:06:38.983 user 0m0.251s 00:06:38.983 sys 0m0.127s 00:06:38.983 15:05:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.983 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:38.983 ************************************ 00:06:38.983 END TEST dd_inflate_file 00:06:38.983 ************************************ 00:06:39.243 15:05:08 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:39.243 15:05:08 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:39.243 15:05:08 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:39.243 15:05:08 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:39.243 15:05:08 -- dd/common.sh@31 -- # xtrace_disable 00:06:39.243 15:05:08 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:39.243 15:05:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.243 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:39.243 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:39.243 ************************************ 00:06:39.243 START TEST dd_copy_to_out_bdev 00:06:39.243 ************************************ 00:06:39.243 15:05:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:39.243 [2024-11-06 15:05:08.345040] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:39.243 [2024-11-06 15:05:08.345124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58844 ] 00:06:39.243 { 00:06:39.243 "subsystems": [ 00:06:39.243 { 00:06:39.243 "subsystem": "bdev", 00:06:39.243 "config": [ 00:06:39.243 { 00:06:39.243 "params": { 00:06:39.243 "trtype": "pcie", 00:06:39.243 "traddr": "0000:00:06.0", 00:06:39.243 "name": "Nvme0" 00:06:39.243 }, 00:06:39.243 "method": "bdev_nvme_attach_controller" 00:06:39.243 }, 00:06:39.243 { 00:06:39.243 "params": { 00:06:39.243 "trtype": "pcie", 00:06:39.243 "traddr": "0000:00:07.0", 00:06:39.243 "name": "Nvme1" 00:06:39.243 }, 00:06:39.243 "method": "bdev_nvme_attach_controller" 00:06:39.243 }, 00:06:39.243 { 00:06:39.243 "method": "bdev_wait_for_examine" 00:06:39.243 } 00:06:39.243 ] 00:06:39.243 } 00:06:39.243 ] 00:06:39.243 } 00:06:39.243 [2024-11-06 15:05:08.480382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.503 [2024-11-06 15:05:08.531060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.439  [2024-11-06T15:05:10.282Z] Copying: 47/64 [MB] (47 MBps) [2024-11-06T15:05:10.282Z] Copying: 64/64 [MB] (average 47 MBps) 00:06:41.007 00:06:41.007 00:06:41.007 real 0m1.944s 00:06:41.007 user 0m1.724s 00:06:41.007 sys 0m0.150s 00:06:41.007 15:05:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.007 15:05:10 -- common/autotest_common.sh@10 -- # set +x 00:06:41.007 ************************************ 00:06:41.007 END TEST dd_copy_to_out_bdev 00:06:41.007 ************************************ 00:06:41.267 15:05:10 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:41.267 15:05:10 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:41.267 15:05:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:41.267 15:05:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.267 15:05:10 -- common/autotest_common.sh@10 -- # set +x 00:06:41.267 ************************************ 00:06:41.267 START TEST dd_offset_magic 00:06:41.267 ************************************ 00:06:41.267 15:05:10 -- common/autotest_common.sh@1114 -- # offset_magic 00:06:41.267 15:05:10 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:41.267 15:05:10 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:41.267 15:05:10 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:41.267 15:05:10 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:41.267 15:05:10 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:41.267 15:05:10 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:41.267 15:05:10 -- dd/common.sh@31 -- # xtrace_disable 00:06:41.267 15:05:10 -- common/autotest_common.sh@10 -- # set +x 00:06:41.267 [2024-11-06 15:05:10.340540] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:41.267 [2024-11-06 15:05:10.340796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58888 ] 00:06:41.267 { 00:06:41.267 "subsystems": [ 00:06:41.267 { 00:06:41.267 "subsystem": "bdev", 00:06:41.267 "config": [ 00:06:41.267 { 00:06:41.267 "params": { 00:06:41.267 "trtype": "pcie", 00:06:41.267 "traddr": "0000:00:06.0", 00:06:41.267 "name": "Nvme0" 00:06:41.267 }, 00:06:41.267 "method": "bdev_nvme_attach_controller" 00:06:41.267 }, 00:06:41.267 { 00:06:41.267 "params": { 00:06:41.267 "trtype": "pcie", 00:06:41.267 "traddr": "0000:00:07.0", 00:06:41.267 "name": "Nvme1" 00:06:41.267 }, 00:06:41.267 "method": "bdev_nvme_attach_controller" 00:06:41.267 }, 00:06:41.267 { 00:06:41.267 "method": "bdev_wait_for_examine" 00:06:41.267 } 00:06:41.267 ] 00:06:41.267 } 00:06:41.267 ] 00:06:41.267 } 00:06:41.267 [2024-11-06 15:05:10.470403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.267 [2024-11-06 15:05:10.520474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.526  [2024-11-06T15:05:11.060Z] Copying: 65/65 [MB] (average 955 MBps) 00:06:41.785 00:06:41.785 15:05:10 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:41.785 15:05:10 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:41.785 15:05:10 -- dd/common.sh@31 -- # xtrace_disable 00:06:41.785 15:05:10 -- common/autotest_common.sh@10 -- # set +x 00:06:41.785 [2024-11-06 15:05:11.011752] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:41.785 [2024-11-06 15:05:11.012285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58897 ] 00:06:41.785 { 00:06:41.785 "subsystems": [ 00:06:41.785 { 00:06:41.785 "subsystem": "bdev", 00:06:41.785 "config": [ 00:06:41.785 { 00:06:41.785 "params": { 00:06:41.785 "trtype": "pcie", 00:06:41.785 "traddr": "0000:00:06.0", 00:06:41.785 "name": "Nvme0" 00:06:41.785 }, 00:06:41.785 "method": "bdev_nvme_attach_controller" 00:06:41.785 }, 00:06:41.785 { 00:06:41.785 "params": { 00:06:41.785 "trtype": "pcie", 00:06:41.785 "traddr": "0000:00:07.0", 00:06:41.785 "name": "Nvme1" 00:06:41.785 }, 00:06:41.785 "method": "bdev_nvme_attach_controller" 00:06:41.785 }, 00:06:41.785 { 00:06:41.785 "method": "bdev_wait_for_examine" 00:06:41.785 } 00:06:41.785 ] 00:06:41.785 } 00:06:41.785 ] 00:06:41.785 } 00:06:42.045 [2024-11-06 15:05:11.148072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.045 [2024-11-06 15:05:11.202837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.304  [2024-11-06T15:05:11.579Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:42.304 00:06:42.304 15:05:11 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:42.304 15:05:11 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:42.304 15:05:11 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:42.304 15:05:11 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:42.304 15:05:11 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:42.304 15:05:11 -- dd/common.sh@31 -- # xtrace_disable 00:06:42.304 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:06:42.563 [2024-11-06 15:05:11.611184] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:42.563 [2024-11-06 15:05:11.611543] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58917 ] 00:06:42.563 { 00:06:42.563 "subsystems": [ 00:06:42.563 { 00:06:42.563 "subsystem": "bdev", 00:06:42.563 "config": [ 00:06:42.563 { 00:06:42.563 "params": { 00:06:42.563 "trtype": "pcie", 00:06:42.563 "traddr": "0000:00:06.0", 00:06:42.564 "name": "Nvme0" 00:06:42.564 }, 00:06:42.564 "method": "bdev_nvme_attach_controller" 00:06:42.564 }, 00:06:42.564 { 00:06:42.564 "params": { 00:06:42.564 "trtype": "pcie", 00:06:42.564 "traddr": "0000:00:07.0", 00:06:42.564 "name": "Nvme1" 00:06:42.564 }, 00:06:42.564 "method": "bdev_nvme_attach_controller" 00:06:42.564 }, 00:06:42.564 { 00:06:42.564 "method": "bdev_wait_for_examine" 00:06:42.564 } 00:06:42.564 ] 00:06:42.564 } 00:06:42.564 ] 00:06:42.564 } 00:06:42.564 [2024-11-06 15:05:11.752151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.564 [2024-11-06 15:05:11.800865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.823  [2024-11-06T15:05:12.356Z] Copying: 65/65 [MB] (average 1048 MBps) 00:06:43.081 00:06:43.081 15:05:12 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:43.081 15:05:12 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:43.081 15:05:12 -- dd/common.sh@31 -- # xtrace_disable 00:06:43.081 15:05:12 -- common/autotest_common.sh@10 -- # set +x 00:06:43.081 [2024-11-06 15:05:12.302884] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:43.082 [2024-11-06 15:05:12.302965] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58932 ] 00:06:43.082 { 00:06:43.082 "subsystems": [ 00:06:43.082 { 00:06:43.082 "subsystem": "bdev", 00:06:43.082 "config": [ 00:06:43.082 { 00:06:43.082 "params": { 00:06:43.082 "trtype": "pcie", 00:06:43.082 "traddr": "0000:00:06.0", 00:06:43.082 "name": "Nvme0" 00:06:43.082 }, 00:06:43.082 "method": "bdev_nvme_attach_controller" 00:06:43.082 }, 00:06:43.082 { 00:06:43.082 "params": { 00:06:43.082 "trtype": "pcie", 00:06:43.082 "traddr": "0000:00:07.0", 00:06:43.082 "name": "Nvme1" 00:06:43.082 }, 00:06:43.082 "method": "bdev_nvme_attach_controller" 00:06:43.082 }, 00:06:43.082 { 00:06:43.082 "method": "bdev_wait_for_examine" 00:06:43.082 } 00:06:43.082 ] 00:06:43.082 } 00:06:43.082 ] 00:06:43.082 } 00:06:43.341 [2024-11-06 15:05:12.439392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.341 [2024-11-06 15:05:12.487912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.599  [2024-11-06T15:05:12.874Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:43.599 00:06:43.858 15:05:12 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:43.858 15:05:12 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:43.858 00:06:43.858 real 0m2.577s 00:06:43.858 user 0m1.931s 00:06:43.858 sys 0m0.446s 00:06:43.858 ************************************ 00:06:43.858 15:05:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.858 15:05:12 -- common/autotest_common.sh@10 -- # set +x 00:06:43.858 END TEST dd_offset_magic 00:06:43.858 ************************************ 00:06:43.858 15:05:12 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:43.858 15:05:12 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:43.858 15:05:12 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:43.858 15:05:12 -- dd/common.sh@11 -- # local nvme_ref= 00:06:43.858 15:05:12 -- dd/common.sh@12 -- # local size=4194330 00:06:43.858 15:05:12 -- dd/common.sh@14 -- # local bs=1048576 00:06:43.858 15:05:12 -- dd/common.sh@15 -- # local count=5 00:06:43.858 15:05:12 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:43.858 15:05:12 -- dd/common.sh@18 -- # gen_conf 00:06:43.858 15:05:12 -- dd/common.sh@31 -- # xtrace_disable 00:06:43.858 15:05:12 -- common/autotest_common.sh@10 -- # set +x 00:06:43.858 [2024-11-06 15:05:12.971956] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:43.858 [2024-11-06 15:05:12.972056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58961 ] 00:06:43.858 { 00:06:43.858 "subsystems": [ 00:06:43.858 { 00:06:43.858 "subsystem": "bdev", 00:06:43.858 "config": [ 00:06:43.858 { 00:06:43.858 "params": { 00:06:43.858 "trtype": "pcie", 00:06:43.858 "traddr": "0000:00:06.0", 00:06:43.858 "name": "Nvme0" 00:06:43.858 }, 00:06:43.858 "method": "bdev_nvme_attach_controller" 00:06:43.858 }, 00:06:43.858 { 00:06:43.858 "params": { 00:06:43.858 "trtype": "pcie", 00:06:43.858 "traddr": "0000:00:07.0", 00:06:43.858 "name": "Nvme1" 00:06:43.858 }, 00:06:43.858 "method": "bdev_nvme_attach_controller" 00:06:43.858 }, 00:06:43.858 { 00:06:43.858 "method": "bdev_wait_for_examine" 00:06:43.858 } 00:06:43.858 ] 00:06:43.858 } 00:06:43.858 ] 00:06:43.858 } 00:06:43.858 [2024-11-06 15:05:13.109420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.117 [2024-11-06 15:05:13.177391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.117  [2024-11-06T15:05:13.650Z] Copying: 5120/5120 [kB] (average 1666 MBps) 00:06:44.375 00:06:44.375 15:05:13 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:44.375 15:05:13 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:44.375 15:05:13 -- dd/common.sh@11 -- # local nvme_ref= 00:06:44.375 15:05:13 -- dd/common.sh@12 -- # local size=4194330 00:06:44.375 15:05:13 -- dd/common.sh@14 -- # local bs=1048576 00:06:44.375 15:05:13 -- dd/common.sh@15 -- # local count=5 00:06:44.375 15:05:13 -- dd/common.sh@18 -- # gen_conf 00:06:44.375 15:05:13 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:44.375 15:05:13 -- dd/common.sh@31 -- # xtrace_disable 00:06:44.375 15:05:13 -- common/autotest_common.sh@10 -- # set +x 00:06:44.375 [2024-11-06 15:05:13.600399] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:44.375 [2024-11-06 15:05:13.600492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58981 ] 00:06:44.375 { 00:06:44.375 "subsystems": [ 00:06:44.375 { 00:06:44.375 "subsystem": "bdev", 00:06:44.375 "config": [ 00:06:44.375 { 00:06:44.375 "params": { 00:06:44.375 "trtype": "pcie", 00:06:44.375 "traddr": "0000:00:06.0", 00:06:44.375 "name": "Nvme0" 00:06:44.375 }, 00:06:44.375 "method": "bdev_nvme_attach_controller" 00:06:44.375 }, 00:06:44.375 { 00:06:44.375 "params": { 00:06:44.375 "trtype": "pcie", 00:06:44.375 "traddr": "0000:00:07.0", 00:06:44.375 "name": "Nvme1" 00:06:44.375 }, 00:06:44.375 "method": "bdev_nvme_attach_controller" 00:06:44.375 }, 00:06:44.375 { 00:06:44.375 "method": "bdev_wait_for_examine" 00:06:44.375 } 00:06:44.375 ] 00:06:44.375 } 00:06:44.375 ] 00:06:44.375 } 00:06:44.634 [2024-11-06 15:05:13.739546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.634 [2024-11-06 15:05:13.807632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.893  [2024-11-06T15:05:14.449Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:06:45.174 00:06:45.174 15:05:14 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:45.174 00:06:45.174 real 0m6.655s 00:06:45.174 user 0m4.992s 00:06:45.174 sys 0m1.151s 00:06:45.174 15:05:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.174 15:05:14 -- common/autotest_common.sh@10 -- # set +x 00:06:45.174 ************************************ 00:06:45.174 END TEST spdk_dd_bdev_to_bdev 00:06:45.174 ************************************ 00:06:45.174 15:05:14 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:45.174 15:05:14 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:45.174 15:05:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:45.174 15:05:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.174 15:05:14 -- common/autotest_common.sh@10 -- # set +x 00:06:45.174 ************************************ 00:06:45.174 START TEST spdk_dd_uring 00:06:45.174 ************************************ 00:06:45.174 15:05:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:45.174 * Looking for test storage... 
00:06:45.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:45.174 15:05:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:45.174 15:05:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:45.174 15:05:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:45.174 15:05:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:45.174 15:05:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:45.174 15:05:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:45.174 15:05:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:45.174 15:05:14 -- scripts/common.sh@335 -- # IFS=.-: 00:06:45.174 15:05:14 -- scripts/common.sh@335 -- # read -ra ver1 00:06:45.174 15:05:14 -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.174 15:05:14 -- scripts/common.sh@336 -- # read -ra ver2 00:06:45.174 15:05:14 -- scripts/common.sh@337 -- # local 'op=<' 00:06:45.174 15:05:14 -- scripts/common.sh@339 -- # ver1_l=2 00:06:45.174 15:05:14 -- scripts/common.sh@340 -- # ver2_l=1 00:06:45.174 15:05:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:45.174 15:05:14 -- scripts/common.sh@343 -- # case "$op" in 00:06:45.174 15:05:14 -- scripts/common.sh@344 -- # : 1 00:06:45.174 15:05:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:45.174 15:05:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.174 15:05:14 -- scripts/common.sh@364 -- # decimal 1 00:06:45.174 15:05:14 -- scripts/common.sh@352 -- # local d=1 00:06:45.174 15:05:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.174 15:05:14 -- scripts/common.sh@354 -- # echo 1 00:06:45.174 15:05:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:45.174 15:05:14 -- scripts/common.sh@365 -- # decimal 2 00:06:45.174 15:05:14 -- scripts/common.sh@352 -- # local d=2 00:06:45.174 15:05:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.174 15:05:14 -- scripts/common.sh@354 -- # echo 2 00:06:45.174 15:05:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:45.174 15:05:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:45.174 15:05:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:45.174 15:05:14 -- scripts/common.sh@367 -- # return 0 00:06:45.174 15:05:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.174 15:05:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:45.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.174 --rc genhtml_branch_coverage=1 00:06:45.174 --rc genhtml_function_coverage=1 00:06:45.174 --rc genhtml_legend=1 00:06:45.174 --rc geninfo_all_blocks=1 00:06:45.174 --rc geninfo_unexecuted_blocks=1 00:06:45.174 00:06:45.174 ' 00:06:45.174 15:05:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:45.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.174 --rc genhtml_branch_coverage=1 00:06:45.174 --rc genhtml_function_coverage=1 00:06:45.174 --rc genhtml_legend=1 00:06:45.174 --rc geninfo_all_blocks=1 00:06:45.174 --rc geninfo_unexecuted_blocks=1 00:06:45.174 00:06:45.174 ' 00:06:45.174 15:05:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:45.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.174 --rc genhtml_branch_coverage=1 00:06:45.174 --rc genhtml_function_coverage=1 00:06:45.174 --rc genhtml_legend=1 00:06:45.174 --rc geninfo_all_blocks=1 00:06:45.174 --rc geninfo_unexecuted_blocks=1 00:06:45.174 00:06:45.174 ' 00:06:45.174 15:05:14 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:45.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.174 --rc genhtml_branch_coverage=1 00:06:45.174 --rc genhtml_function_coverage=1 00:06:45.174 --rc genhtml_legend=1 00:06:45.174 --rc geninfo_all_blocks=1 00:06:45.174 --rc geninfo_unexecuted_blocks=1 00:06:45.174 00:06:45.174 ' 00:06:45.174 15:05:14 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:45.174 15:05:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.174 15:05:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.174 15:05:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.174 15:05:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.174 15:05:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.174 15:05:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.174 15:05:14 -- paths/export.sh@5 -- # export PATH 00:06:45.174 15:05:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.174 15:05:14 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:45.174 15:05:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:45.174 15:05:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.174 15:05:14 -- common/autotest_common.sh@10 -- # set +x 00:06:45.174 ************************************ 00:06:45.174 START TEST dd_uring_copy 00:06:45.174 ************************************ 00:06:45.174 15:05:14 
-- common/autotest_common.sh@1114 -- # uring_zram_copy 00:06:45.174 15:05:14 -- dd/uring.sh@15 -- # local zram_dev_id 00:06:45.174 15:05:14 -- dd/uring.sh@16 -- # local magic 00:06:45.174 15:05:14 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:45.174 15:05:14 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:45.174 15:05:14 -- dd/uring.sh@19 -- # local verify_magic 00:06:45.174 15:05:14 -- dd/uring.sh@21 -- # init_zram 00:06:45.174 15:05:14 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:06:45.175 15:05:14 -- dd/common.sh@164 -- # return 00:06:45.175 15:05:14 -- dd/uring.sh@22 -- # create_zram_dev 00:06:45.175 15:05:14 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:06:45.447 15:05:14 -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:45.447 15:05:14 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:45.447 15:05:14 -- dd/common.sh@181 -- # local id=1 00:06:45.447 15:05:14 -- dd/common.sh@182 -- # local size=512M 00:06:45.447 15:05:14 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:06:45.447 15:05:14 -- dd/common.sh@186 -- # echo 512M 00:06:45.447 15:05:14 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:45.447 15:05:14 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:45.447 15:05:14 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:45.447 15:05:14 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:45.447 15:05:14 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:45.447 15:05:14 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:45.447 15:05:14 -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:45.447 15:05:14 -- dd/common.sh@98 -- # xtrace_disable 00:06:45.447 15:05:14 -- common/autotest_common.sh@10 -- # set +x 00:06:45.447 15:05:14 -- dd/uring.sh@41 -- # magic=zhy6x1nribvttem4xl8ecmb4y2mvgxwy5kr1bopih27rldvj8avcujzepola8zutucs7lqd5t8z8vcsvwxixtae5a9gzow38g9c3sznkuipyv39z8roze2newz2cg0z9rd36mi18q7a3qotz24ck9uhg054msih4ed9fglqfcms3jonpfmtsrv4v2nnigivlz4nyu8hlyu4ki3tmvn29rzdafl4w77ww5f6bbmj4qbjjx34f4ftvu283av42solfck29ag9z6rcjbhv7kla5pw2x4kowy8hrxm1374hjh2qph6n8010y1n89qiyulwtnvdum1yvr1foglqhwjv5un9nphzjqzb3t7f8b14t4hf98nj4d04fobmdyza3gr91oj7dc1v33xikzx7g1gjs12zdj4le2mqoli1yf0sjlfigmo8mpobiw663ghd5zg1kfhkqs0974hpfemrpx3tw63en5n9fwqxiu8uhxno7g617k3t3fogrp8zis9is1ci3jgqzptli3ru5kwz8jg0x0688ktibfd9pafa0ol2qucogmwursrtaxr03h0jcd6dbt9gy1j68gppj9mfzkruz8buegckz431tqwxjln65v183lexj7xsyj11ueht6nbt4e4wufzxeb2v2adoqf4gn2ti8zmcodb5ffbl0u9v94ajk64xo7evr1tqwszxok3vjqnho5zcevulvog4cuf2jv4pgelwim5ricrozigcvvomm21c28uskbxx1rjk7nqsot9zgo6r3tac6jrd4cdxpxld3tgn07q42pssv98mx4j8b7zejmrq9sq72d2rs2lqsh985z4bggl0dfy4vwcc5u10p6oavbz8y31iz96x1ojvycntryj6s0d6ngyhw2oet39v0hcwt0zt3sashhazy2bwanzd1vgp2f73rpzaegn3xwvhv2pt1q9wwn37hpt6qcwo9oq54x0rm76b099vj7mnxrgdi5x0a5zh9rwfdecfs15wnxpjeef89a2ir4yf3i 00:06:45.447 15:05:14 -- dd/uring.sh@42 -- # echo 
zhy6x1nribvttem4xl8ecmb4y2mvgxwy5kr1bopih27rldvj8avcujzepola8zutucs7lqd5t8z8vcsvwxixtae5a9gzow38g9c3sznkuipyv39z8roze2newz2cg0z9rd36mi18q7a3qotz24ck9uhg054msih4ed9fglqfcms3jonpfmtsrv4v2nnigivlz4nyu8hlyu4ki3tmvn29rzdafl4w77ww5f6bbmj4qbjjx34f4ftvu283av42solfck29ag9z6rcjbhv7kla5pw2x4kowy8hrxm1374hjh2qph6n8010y1n89qiyulwtnvdum1yvr1foglqhwjv5un9nphzjqzb3t7f8b14t4hf98nj4d04fobmdyza3gr91oj7dc1v33xikzx7g1gjs12zdj4le2mqoli1yf0sjlfigmo8mpobiw663ghd5zg1kfhkqs0974hpfemrpx3tw63en5n9fwqxiu8uhxno7g617k3t3fogrp8zis9is1ci3jgqzptli3ru5kwz8jg0x0688ktibfd9pafa0ol2qucogmwursrtaxr03h0jcd6dbt9gy1j68gppj9mfzkruz8buegckz431tqwxjln65v183lexj7xsyj11ueht6nbt4e4wufzxeb2v2adoqf4gn2ti8zmcodb5ffbl0u9v94ajk64xo7evr1tqwszxok3vjqnho5zcevulvog4cuf2jv4pgelwim5ricrozigcvvomm21c28uskbxx1rjk7nqsot9zgo6r3tac6jrd4cdxpxld3tgn07q42pssv98mx4j8b7zejmrq9sq72d2rs2lqsh985z4bggl0dfy4vwcc5u10p6oavbz8y31iz96x1ojvycntryj6s0d6ngyhw2oet39v0hcwt0zt3sashhazy2bwanzd1vgp2f73rpzaegn3xwvhv2pt1q9wwn37hpt6qcwo9oq54x0rm76b099vj7mnxrgdi5x0a5zh9rwfdecfs15wnxpjeef89a2ir4yf3i 00:06:45.447 15:05:14 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:45.447 [2024-11-06 15:05:14.504515] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:45.447 [2024-11-06 15:05:14.504633] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59056 ] 00:06:45.447 [2024-11-06 15:05:14.640576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.447 [2024-11-06 15:05:14.687916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.030  [2024-11-06T15:05:15.564Z] Copying: 511/511 [MB] (average 1848 MBps) 00:06:46.289 00:06:46.289 15:05:15 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:46.289 15:05:15 -- dd/uring.sh@54 -- # gen_conf 00:06:46.289 15:05:15 -- dd/common.sh@31 -- # xtrace_disable 00:06:46.289 15:05:15 -- common/autotest_common.sh@10 -- # set +x 00:06:46.289 [2024-11-06 15:05:15.408831] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:46.289 [2024-11-06 15:05:15.408932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59066 ] 00:06:46.289 { 00:06:46.289 "subsystems": [ 00:06:46.289 { 00:06:46.289 "subsystem": "bdev", 00:06:46.289 "config": [ 00:06:46.289 { 00:06:46.289 "params": { 00:06:46.289 "block_size": 512, 00:06:46.289 "num_blocks": 1048576, 00:06:46.289 "name": "malloc0" 00:06:46.289 }, 00:06:46.289 "method": "bdev_malloc_create" 00:06:46.289 }, 00:06:46.289 { 00:06:46.289 "params": { 00:06:46.289 "filename": "/dev/zram1", 00:06:46.289 "name": "uring0" 00:06:46.289 }, 00:06:46.289 "method": "bdev_uring_create" 00:06:46.289 }, 00:06:46.289 { 00:06:46.289 "method": "bdev_wait_for_examine" 00:06:46.289 } 00:06:46.289 ] 00:06:46.289 } 00:06:46.289 ] 00:06:46.289 } 00:06:46.289 [2024-11-06 15:05:15.543267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.548 [2024-11-06 15:05:15.594092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.484  [2024-11-06T15:05:18.137Z] Copying: 240/512 [MB] (240 MBps) [2024-11-06T15:05:18.137Z] Copying: 485/512 [MB] (244 MBps) [2024-11-06T15:05:18.137Z] Copying: 512/512 [MB] (average 243 MBps) 00:06:48.862 00:06:48.862 15:05:18 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:48.862 15:05:18 -- dd/uring.sh@60 -- # gen_conf 00:06:48.862 15:05:18 -- dd/common.sh@31 -- # xtrace_disable 00:06:48.862 15:05:18 -- common/autotest_common.sh@10 -- # set +x 00:06:49.121 [2024-11-06 15:05:18.156082] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:49.121 [2024-11-06 15:05:18.156196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59107 ] 00:06:49.121 { 00:06:49.121 "subsystems": [ 00:06:49.121 { 00:06:49.121 "subsystem": "bdev", 00:06:49.121 "config": [ 00:06:49.121 { 00:06:49.121 "params": { 00:06:49.121 "block_size": 512, 00:06:49.121 "num_blocks": 1048576, 00:06:49.121 "name": "malloc0" 00:06:49.121 }, 00:06:49.121 "method": "bdev_malloc_create" 00:06:49.121 }, 00:06:49.121 { 00:06:49.121 "params": { 00:06:49.121 "filename": "/dev/zram1", 00:06:49.121 "name": "uring0" 00:06:49.121 }, 00:06:49.121 "method": "bdev_uring_create" 00:06:49.121 }, 00:06:49.121 { 00:06:49.121 "method": "bdev_wait_for_examine" 00:06:49.121 } 00:06:49.121 ] 00:06:49.121 } 00:06:49.121 ] 00:06:49.121 } 00:06:49.121 [2024-11-06 15:05:18.295633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.121 [2024-11-06 15:05:18.342858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.498  [2024-11-06T15:05:20.709Z] Copying: 137/512 [MB] (137 MBps) [2024-11-06T15:05:21.646Z] Copying: 281/512 [MB] (143 MBps) [2024-11-06T15:05:22.214Z] Copying: 431/512 [MB] (150 MBps) [2024-11-06T15:05:22.474Z] Copying: 512/512 [MB] (average 146 MBps) 00:06:53.199 00:06:53.199 15:05:22 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:53.199 15:05:22 -- dd/uring.sh@66 -- # [[ zhy6x1nribvttem4xl8ecmb4y2mvgxwy5kr1bopih27rldvj8avcujzepola8zutucs7lqd5t8z8vcsvwxixtae5a9gzow38g9c3sznkuipyv39z8roze2newz2cg0z9rd36mi18q7a3qotz24ck9uhg054msih4ed9fglqfcms3jonpfmtsrv4v2nnigivlz4nyu8hlyu4ki3tmvn29rzdafl4w77ww5f6bbmj4qbjjx34f4ftvu283av42solfck29ag9z6rcjbhv7kla5pw2x4kowy8hrxm1374hjh2qph6n8010y1n89qiyulwtnvdum1yvr1foglqhwjv5un9nphzjqzb3t7f8b14t4hf98nj4d04fobmdyza3gr91oj7dc1v33xikzx7g1gjs12zdj4le2mqoli1yf0sjlfigmo8mpobiw663ghd5zg1kfhkqs0974hpfemrpx3tw63en5n9fwqxiu8uhxno7g617k3t3fogrp8zis9is1ci3jgqzptli3ru5kwz8jg0x0688ktibfd9pafa0ol2qucogmwursrtaxr03h0jcd6dbt9gy1j68gppj9mfzkruz8buegckz431tqwxjln65v183lexj7xsyj11ueht6nbt4e4wufzxeb2v2adoqf4gn2ti8zmcodb5ffbl0u9v94ajk64xo7evr1tqwszxok3vjqnho5zcevulvog4cuf2jv4pgelwim5ricrozigcvvomm21c28uskbxx1rjk7nqsot9zgo6r3tac6jrd4cdxpxld3tgn07q42pssv98mx4j8b7zejmrq9sq72d2rs2lqsh985z4bggl0dfy4vwcc5u10p6oavbz8y31iz96x1ojvycntryj6s0d6ngyhw2oet39v0hcwt0zt3sashhazy2bwanzd1vgp2f73rpzaegn3xwvhv2pt1q9wwn37hpt6qcwo9oq54x0rm76b099vj7mnxrgdi5x0a5zh9rwfdecfs15wnxpjeef89a2ir4yf3i == 
\z\h\y\6\x\1\n\r\i\b\v\t\t\e\m\4\x\l\8\e\c\m\b\4\y\2\m\v\g\x\w\y\5\k\r\1\b\o\p\i\h\2\7\r\l\d\v\j\8\a\v\c\u\j\z\e\p\o\l\a\8\z\u\t\u\c\s\7\l\q\d\5\t\8\z\8\v\c\s\v\w\x\i\x\t\a\e\5\a\9\g\z\o\w\3\8\g\9\c\3\s\z\n\k\u\i\p\y\v\3\9\z\8\r\o\z\e\2\n\e\w\z\2\c\g\0\z\9\r\d\3\6\m\i\1\8\q\7\a\3\q\o\t\z\2\4\c\k\9\u\h\g\0\5\4\m\s\i\h\4\e\d\9\f\g\l\q\f\c\m\s\3\j\o\n\p\f\m\t\s\r\v\4\v\2\n\n\i\g\i\v\l\z\4\n\y\u\8\h\l\y\u\4\k\i\3\t\m\v\n\2\9\r\z\d\a\f\l\4\w\7\7\w\w\5\f\6\b\b\m\j\4\q\b\j\j\x\3\4\f\4\f\t\v\u\2\8\3\a\v\4\2\s\o\l\f\c\k\2\9\a\g\9\z\6\r\c\j\b\h\v\7\k\l\a\5\p\w\2\x\4\k\o\w\y\8\h\r\x\m\1\3\7\4\h\j\h\2\q\p\h\6\n\8\0\1\0\y\1\n\8\9\q\i\y\u\l\w\t\n\v\d\u\m\1\y\v\r\1\f\o\g\l\q\h\w\j\v\5\u\n\9\n\p\h\z\j\q\z\b\3\t\7\f\8\b\1\4\t\4\h\f\9\8\n\j\4\d\0\4\f\o\b\m\d\y\z\a\3\g\r\9\1\o\j\7\d\c\1\v\3\3\x\i\k\z\x\7\g\1\g\j\s\1\2\z\d\j\4\l\e\2\m\q\o\l\i\1\y\f\0\s\j\l\f\i\g\m\o\8\m\p\o\b\i\w\6\6\3\g\h\d\5\z\g\1\k\f\h\k\q\s\0\9\7\4\h\p\f\e\m\r\p\x\3\t\w\6\3\e\n\5\n\9\f\w\q\x\i\u\8\u\h\x\n\o\7\g\6\1\7\k\3\t\3\f\o\g\r\p\8\z\i\s\9\i\s\1\c\i\3\j\g\q\z\p\t\l\i\3\r\u\5\k\w\z\8\j\g\0\x\0\6\8\8\k\t\i\b\f\d\9\p\a\f\a\0\o\l\2\q\u\c\o\g\m\w\u\r\s\r\t\a\x\r\0\3\h\0\j\c\d\6\d\b\t\9\g\y\1\j\6\8\g\p\p\j\9\m\f\z\k\r\u\z\8\b\u\e\g\c\k\z\4\3\1\t\q\w\x\j\l\n\6\5\v\1\8\3\l\e\x\j\7\x\s\y\j\1\1\u\e\h\t\6\n\b\t\4\e\4\w\u\f\z\x\e\b\2\v\2\a\d\o\q\f\4\g\n\2\t\i\8\z\m\c\o\d\b\5\f\f\b\l\0\u\9\v\9\4\a\j\k\6\4\x\o\7\e\v\r\1\t\q\w\s\z\x\o\k\3\v\j\q\n\h\o\5\z\c\e\v\u\l\v\o\g\4\c\u\f\2\j\v\4\p\g\e\l\w\i\m\5\r\i\c\r\o\z\i\g\c\v\v\o\m\m\2\1\c\2\8\u\s\k\b\x\x\1\r\j\k\7\n\q\s\o\t\9\z\g\o\6\r\3\t\a\c\6\j\r\d\4\c\d\x\p\x\l\d\3\t\g\n\0\7\q\4\2\p\s\s\v\9\8\m\x\4\j\8\b\7\z\e\j\m\r\q\9\s\q\7\2\d\2\r\s\2\l\q\s\h\9\8\5\z\4\b\g\g\l\0\d\f\y\4\v\w\c\c\5\u\1\0\p\6\o\a\v\b\z\8\y\3\1\i\z\9\6\x\1\o\j\v\y\c\n\t\r\y\j\6\s\0\d\6\n\g\y\h\w\2\o\e\t\3\9\v\0\h\c\w\t\0\z\t\3\s\a\s\h\h\a\z\y\2\b\w\a\n\z\d\1\v\g\p\2\f\7\3\r\p\z\a\e\g\n\3\x\w\v\h\v\2\p\t\1\q\9\w\w\n\3\7\h\p\t\6\q\c\w\o\9\o\q\5\4\x\0\r\m\7\6\b\0\9\9\v\j\7\m\n\x\r\g\d\i\5\x\0\a\5\z\h\9\r\w\f\d\e\c\f\s\1\5\w\n\x\p\j\e\e\f\8\9\a\2\i\r\4\y\f\3\i ]] 00:06:53.199 15:05:22 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:53.199 15:05:22 -- dd/uring.sh@69 -- # [[ zhy6x1nribvttem4xl8ecmb4y2mvgxwy5kr1bopih27rldvj8avcujzepola8zutucs7lqd5t8z8vcsvwxixtae5a9gzow38g9c3sznkuipyv39z8roze2newz2cg0z9rd36mi18q7a3qotz24ck9uhg054msih4ed9fglqfcms3jonpfmtsrv4v2nnigivlz4nyu8hlyu4ki3tmvn29rzdafl4w77ww5f6bbmj4qbjjx34f4ftvu283av42solfck29ag9z6rcjbhv7kla5pw2x4kowy8hrxm1374hjh2qph6n8010y1n89qiyulwtnvdum1yvr1foglqhwjv5un9nphzjqzb3t7f8b14t4hf98nj4d04fobmdyza3gr91oj7dc1v33xikzx7g1gjs12zdj4le2mqoli1yf0sjlfigmo8mpobiw663ghd5zg1kfhkqs0974hpfemrpx3tw63en5n9fwqxiu8uhxno7g617k3t3fogrp8zis9is1ci3jgqzptli3ru5kwz8jg0x0688ktibfd9pafa0ol2qucogmwursrtaxr03h0jcd6dbt9gy1j68gppj9mfzkruz8buegckz431tqwxjln65v183lexj7xsyj11ueht6nbt4e4wufzxeb2v2adoqf4gn2ti8zmcodb5ffbl0u9v94ajk64xo7evr1tqwszxok3vjqnho5zcevulvog4cuf2jv4pgelwim5ricrozigcvvomm21c28uskbxx1rjk7nqsot9zgo6r3tac6jrd4cdxpxld3tgn07q42pssv98mx4j8b7zejmrq9sq72d2rs2lqsh985z4bggl0dfy4vwcc5u10p6oavbz8y31iz96x1ojvycntryj6s0d6ngyhw2oet39v0hcwt0zt3sashhazy2bwanzd1vgp2f73rpzaegn3xwvhv2pt1q9wwn37hpt6qcwo9oq54x0rm76b099vj7mnxrgdi5x0a5zh9rwfdecfs15wnxpjeef89a2ir4yf3i == 
\z\h\y\6\x\1\n\r\i\b\v\t\t\e\m\4\x\l\8\e\c\m\b\4\y\2\m\v\g\x\w\y\5\k\r\1\b\o\p\i\h\2\7\r\l\d\v\j\8\a\v\c\u\j\z\e\p\o\l\a\8\z\u\t\u\c\s\7\l\q\d\5\t\8\z\8\v\c\s\v\w\x\i\x\t\a\e\5\a\9\g\z\o\w\3\8\g\9\c\3\s\z\n\k\u\i\p\y\v\3\9\z\8\r\o\z\e\2\n\e\w\z\2\c\g\0\z\9\r\d\3\6\m\i\1\8\q\7\a\3\q\o\t\z\2\4\c\k\9\u\h\g\0\5\4\m\s\i\h\4\e\d\9\f\g\l\q\f\c\m\s\3\j\o\n\p\f\m\t\s\r\v\4\v\2\n\n\i\g\i\v\l\z\4\n\y\u\8\h\l\y\u\4\k\i\3\t\m\v\n\2\9\r\z\d\a\f\l\4\w\7\7\w\w\5\f\6\b\b\m\j\4\q\b\j\j\x\3\4\f\4\f\t\v\u\2\8\3\a\v\4\2\s\o\l\f\c\k\2\9\a\g\9\z\6\r\c\j\b\h\v\7\k\l\a\5\p\w\2\x\4\k\o\w\y\8\h\r\x\m\1\3\7\4\h\j\h\2\q\p\h\6\n\8\0\1\0\y\1\n\8\9\q\i\y\u\l\w\t\n\v\d\u\m\1\y\v\r\1\f\o\g\l\q\h\w\j\v\5\u\n\9\n\p\h\z\j\q\z\b\3\t\7\f\8\b\1\4\t\4\h\f\9\8\n\j\4\d\0\4\f\o\b\m\d\y\z\a\3\g\r\9\1\o\j\7\d\c\1\v\3\3\x\i\k\z\x\7\g\1\g\j\s\1\2\z\d\j\4\l\e\2\m\q\o\l\i\1\y\f\0\s\j\l\f\i\g\m\o\8\m\p\o\b\i\w\6\6\3\g\h\d\5\z\g\1\k\f\h\k\q\s\0\9\7\4\h\p\f\e\m\r\p\x\3\t\w\6\3\e\n\5\n\9\f\w\q\x\i\u\8\u\h\x\n\o\7\g\6\1\7\k\3\t\3\f\o\g\r\p\8\z\i\s\9\i\s\1\c\i\3\j\g\q\z\p\t\l\i\3\r\u\5\k\w\z\8\j\g\0\x\0\6\8\8\k\t\i\b\f\d\9\p\a\f\a\0\o\l\2\q\u\c\o\g\m\w\u\r\s\r\t\a\x\r\0\3\h\0\j\c\d\6\d\b\t\9\g\y\1\j\6\8\g\p\p\j\9\m\f\z\k\r\u\z\8\b\u\e\g\c\k\z\4\3\1\t\q\w\x\j\l\n\6\5\v\1\8\3\l\e\x\j\7\x\s\y\j\1\1\u\e\h\t\6\n\b\t\4\e\4\w\u\f\z\x\e\b\2\v\2\a\d\o\q\f\4\g\n\2\t\i\8\z\m\c\o\d\b\5\f\f\b\l\0\u\9\v\9\4\a\j\k\6\4\x\o\7\e\v\r\1\t\q\w\s\z\x\o\k\3\v\j\q\n\h\o\5\z\c\e\v\u\l\v\o\g\4\c\u\f\2\j\v\4\p\g\e\l\w\i\m\5\r\i\c\r\o\z\i\g\c\v\v\o\m\m\2\1\c\2\8\u\s\k\b\x\x\1\r\j\k\7\n\q\s\o\t\9\z\g\o\6\r\3\t\a\c\6\j\r\d\4\c\d\x\p\x\l\d\3\t\g\n\0\7\q\4\2\p\s\s\v\9\8\m\x\4\j\8\b\7\z\e\j\m\r\q\9\s\q\7\2\d\2\r\s\2\l\q\s\h\9\8\5\z\4\b\g\g\l\0\d\f\y\4\v\w\c\c\5\u\1\0\p\6\o\a\v\b\z\8\y\3\1\i\z\9\6\x\1\o\j\v\y\c\n\t\r\y\j\6\s\0\d\6\n\g\y\h\w\2\o\e\t\3\9\v\0\h\c\w\t\0\z\t\3\s\a\s\h\h\a\z\y\2\b\w\a\n\z\d\1\v\g\p\2\f\7\3\r\p\z\a\e\g\n\3\x\w\v\h\v\2\p\t\1\q\9\w\w\n\3\7\h\p\t\6\q\c\w\o\9\o\q\5\4\x\0\r\m\7\6\b\0\9\9\v\j\7\m\n\x\r\g\d\i\5\x\0\a\5\z\h\9\r\w\f\d\e\c\f\s\1\5\w\n\x\p\j\e\e\f\8\9\a\2\i\r\4\y\f\3\i ]] 00:06:53.199 15:05:22 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:53.459 15:05:22 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:53.459 15:05:22 -- dd/uring.sh@75 -- # gen_conf 00:06:53.459 15:05:22 -- dd/common.sh@31 -- # xtrace_disable 00:06:53.459 15:05:22 -- common/autotest_common.sh@10 -- # set +x 00:06:53.459 { 00:06:53.459 "subsystems": [ 00:06:53.459 { 00:06:53.459 "subsystem": "bdev", 00:06:53.459 "config": [ 00:06:53.459 { 00:06:53.459 "params": { 00:06:53.459 "block_size": 512, 00:06:53.459 "num_blocks": 1048576, 00:06:53.459 "name": "malloc0" 00:06:53.459 }, 00:06:53.459 "method": "bdev_malloc_create" 00:06:53.459 }, 00:06:53.459 { 00:06:53.459 "params": { 00:06:53.459 "filename": "/dev/zram1", 00:06:53.459 "name": "uring0" 00:06:53.459 }, 00:06:53.459 "method": "bdev_uring_create" 00:06:53.459 }, 00:06:53.459 { 00:06:53.459 "method": "bdev_wait_for_examine" 00:06:53.459 } 00:06:53.459 ] 00:06:53.459 } 00:06:53.459 ] 00:06:53.459 } 00:06:53.459 [2024-11-06 15:05:22.635544] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:53.459 [2024-11-06 15:05:22.635649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59186 ] 00:06:53.718 [2024-11-06 15:05:22.772198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.718 [2024-11-06 15:05:22.820795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.096  [2024-11-06T15:05:25.309Z] Copying: 180/512 [MB] (180 MBps) [2024-11-06T15:05:25.876Z] Copying: 361/512 [MB] (180 MBps) [2024-11-06T15:05:26.136Z] Copying: 512/512 [MB] (average 180 MBps) 00:06:56.861 00:06:56.861 15:05:26 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:56.861 15:05:26 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:56.861 15:05:26 -- dd/uring.sh@87 -- # : 00:06:56.861 15:05:26 -- dd/uring.sh@87 -- # : 00:06:56.861 15:05:26 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:56.861 15:05:26 -- dd/uring.sh@87 -- # gen_conf 00:06:56.861 15:05:26 -- dd/common.sh@31 -- # xtrace_disable 00:06:56.861 15:05:26 -- common/autotest_common.sh@10 -- # set +x 00:06:56.861 [2024-11-06 15:05:26.106128] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.861 [2024-11-06 15:05:26.106216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59231 ] 00:06:56.861 { 00:06:56.861 "subsystems": [ 00:06:56.861 { 00:06:56.861 "subsystem": "bdev", 00:06:56.861 "config": [ 00:06:56.861 { 00:06:56.861 "params": { 00:06:56.861 "block_size": 512, 00:06:56.861 "num_blocks": 1048576, 00:06:56.861 "name": "malloc0" 00:06:56.861 }, 00:06:56.861 "method": "bdev_malloc_create" 00:06:56.861 }, 00:06:56.861 { 00:06:56.861 "params": { 00:06:56.861 "filename": "/dev/zram1", 00:06:56.861 "name": "uring0" 00:06:56.861 }, 00:06:56.861 "method": "bdev_uring_create" 00:06:56.861 }, 00:06:56.861 { 00:06:56.861 "params": { 00:06:56.861 "name": "uring0" 00:06:56.861 }, 00:06:56.861 "method": "bdev_uring_delete" 00:06:56.861 }, 00:06:56.861 { 00:06:56.861 "method": "bdev_wait_for_examine" 00:06:56.861 } 00:06:56.861 ] 00:06:56.861 } 00:06:56.861 ] 00:06:56.861 } 00:06:57.120 [2024-11-06 15:05:26.234076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.120 [2024-11-06 15:05:26.281859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.379  [2024-11-06T15:05:26.912Z] Copying: 0/0 [B] (average 0 Bps) 00:06:57.637 00:06:57.637 15:05:26 -- dd/uring.sh@94 -- # : 00:06:57.637 15:05:26 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:57.637 15:05:26 -- common/autotest_common.sh@650 -- # local es=0 00:06:57.637 15:05:26 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:57.637 15:05:26 -- dd/uring.sh@94 -- # gen_conf 00:06:57.637 15:05:26 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.637 15:05:26 -- dd/common.sh@31 -- # xtrace_disable 00:06:57.637 15:05:26 -- common/autotest_common.sh@10 -- # set +x 00:06:57.637 15:05:26 
-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.637 15:05:26 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.637 15:05:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.637 15:05:26 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.637 15:05:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.637 15:05:26 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.637 15:05:26 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:57.637 15:05:26 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:57.637 [2024-11-06 15:05:26.811761] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:57.637 [2024-11-06 15:05:26.811881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59265 ] 00:06:57.637 { 00:06:57.637 "subsystems": [ 00:06:57.637 { 00:06:57.637 "subsystem": "bdev", 00:06:57.637 "config": [ 00:06:57.637 { 00:06:57.637 "params": { 00:06:57.637 "block_size": 512, 00:06:57.637 "num_blocks": 1048576, 00:06:57.637 "name": "malloc0" 00:06:57.637 }, 00:06:57.637 "method": "bdev_malloc_create" 00:06:57.637 }, 00:06:57.637 { 00:06:57.637 "params": { 00:06:57.637 "filename": "/dev/zram1", 00:06:57.637 "name": "uring0" 00:06:57.637 }, 00:06:57.637 "method": "bdev_uring_create" 00:06:57.637 }, 00:06:57.637 { 00:06:57.637 "params": { 00:06:57.637 "name": "uring0" 00:06:57.637 }, 00:06:57.637 "method": "bdev_uring_delete" 00:06:57.637 }, 00:06:57.637 { 00:06:57.637 "method": "bdev_wait_for_examine" 00:06:57.637 } 00:06:57.637 ] 00:06:57.637 } 00:06:57.637 ] 00:06:57.637 } 00:06:57.896 [2024-11-06 15:05:26.944887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.896 [2024-11-06 15:05:26.995381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.896 [2024-11-06 15:05:27.136585] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:57.896 [2024-11-06 15:05:27.136636] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:57.896 [2024-11-06 15:05:27.136664] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:06:57.896 [2024-11-06 15:05:27.136705] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.156 [2024-11-06 15:05:27.303059] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:58.156 15:05:27 -- common/autotest_common.sh@653 -- # es=237 00:06:58.156 15:05:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.156 15:05:27 -- common/autotest_common.sh@662 -- # es=109 00:06:58.156 15:05:27 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:58.156 15:05:27 -- common/autotest_common.sh@670 -- # es=1 00:06:58.156 15:05:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.156 15:05:27 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:06:58.156 15:05:27 -- dd/common.sh@172 -- # local id=1 00:06:58.156 15:05:27 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:06:58.156 15:05:27 -- dd/common.sh@176 -- # echo 1 00:06:58.156 15:05:27 -- dd/common.sh@177 -- # echo 
1 00:06:58.156 15:05:27 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:58.414 00:06:58.414 real 0m13.243s 00:06:58.414 user 0m7.584s 00:06:58.414 ************************************ 00:06:58.414 END TEST dd_uring_copy 00:06:58.414 ************************************ 00:06:58.415 sys 0m5.091s 00:06:58.415 15:05:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.415 15:05:27 -- common/autotest_common.sh@10 -- # set +x 00:06:58.674 ************************************ 00:06:58.674 END TEST spdk_dd_uring 00:06:58.674 ************************************ 00:06:58.674 00:06:58.674 real 0m13.457s 00:06:58.674 user 0m7.703s 00:06:58.674 sys 0m5.191s 00:06:58.674 15:05:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.674 15:05:27 -- common/autotest_common.sh@10 -- # set +x 00:06:58.674 15:05:27 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:58.674 15:05:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:58.674 15:05:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.674 15:05:27 -- common/autotest_common.sh@10 -- # set +x 00:06:58.674 ************************************ 00:06:58.674 START TEST spdk_dd_sparse 00:06:58.674 ************************************ 00:06:58.674 15:05:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:58.674 * Looking for test storage... 00:06:58.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:58.674 15:05:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:58.674 15:05:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:58.674 15:05:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:58.933 15:05:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:58.933 15:05:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:58.933 15:05:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:58.933 15:05:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:58.933 15:05:27 -- scripts/common.sh@335 -- # IFS=.-: 00:06:58.933 15:05:27 -- scripts/common.sh@335 -- # read -ra ver1 00:06:58.933 15:05:27 -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.933 15:05:27 -- scripts/common.sh@336 -- # read -ra ver2 00:06:58.933 15:05:27 -- scripts/common.sh@337 -- # local 'op=<' 00:06:58.933 15:05:27 -- scripts/common.sh@339 -- # ver1_l=2 00:06:58.933 15:05:27 -- scripts/common.sh@340 -- # ver2_l=1 00:06:58.933 15:05:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:58.933 15:05:27 -- scripts/common.sh@343 -- # case "$op" in 00:06:58.933 15:05:27 -- scripts/common.sh@344 -- # : 1 00:06:58.933 15:05:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:58.933 15:05:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.933 15:05:27 -- scripts/common.sh@364 -- # decimal 1 00:06:58.933 15:05:27 -- scripts/common.sh@352 -- # local d=1 00:06:58.933 15:05:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.933 15:05:27 -- scripts/common.sh@354 -- # echo 1 00:06:58.933 15:05:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:58.933 15:05:27 -- scripts/common.sh@365 -- # decimal 2 00:06:58.933 15:05:27 -- scripts/common.sh@352 -- # local d=2 00:06:58.933 15:05:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.933 15:05:27 -- scripts/common.sh@354 -- # echo 2 00:06:58.933 15:05:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:58.933 15:05:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:58.933 15:05:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:58.933 15:05:27 -- scripts/common.sh@367 -- # return 0 00:06:58.933 15:05:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.933 15:05:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:58.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.933 --rc genhtml_branch_coverage=1 00:06:58.933 --rc genhtml_function_coverage=1 00:06:58.933 --rc genhtml_legend=1 00:06:58.933 --rc geninfo_all_blocks=1 00:06:58.933 --rc geninfo_unexecuted_blocks=1 00:06:58.933 00:06:58.933 ' 00:06:58.933 15:05:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:58.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.933 --rc genhtml_branch_coverage=1 00:06:58.933 --rc genhtml_function_coverage=1 00:06:58.933 --rc genhtml_legend=1 00:06:58.933 --rc geninfo_all_blocks=1 00:06:58.933 --rc geninfo_unexecuted_blocks=1 00:06:58.933 00:06:58.933 ' 00:06:58.934 15:05:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:58.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.934 --rc genhtml_branch_coverage=1 00:06:58.934 --rc genhtml_function_coverage=1 00:06:58.934 --rc genhtml_legend=1 00:06:58.934 --rc geninfo_all_blocks=1 00:06:58.934 --rc geninfo_unexecuted_blocks=1 00:06:58.934 00:06:58.934 ' 00:06:58.934 15:05:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:58.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.934 --rc genhtml_branch_coverage=1 00:06:58.934 --rc genhtml_function_coverage=1 00:06:58.934 --rc genhtml_legend=1 00:06:58.934 --rc geninfo_all_blocks=1 00:06:58.934 --rc geninfo_unexecuted_blocks=1 00:06:58.934 00:06:58.934 ' 00:06:58.934 15:05:27 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:58.934 15:05:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.934 15:05:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.934 15:05:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.934 15:05:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.934 15:05:27 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.934 15:05:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.934 15:05:27 -- paths/export.sh@5 -- # export PATH 00:06:58.934 15:05:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.934 15:05:28 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:06:58.934 15:05:28 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:06:58.934 15:05:28 -- dd/sparse.sh@110 -- # file1=file_zero1 00:06:58.934 15:05:28 -- dd/sparse.sh@111 -- # file2=file_zero2 00:06:58.934 15:05:28 -- dd/sparse.sh@112 -- # file3=file_zero3 00:06:58.934 15:05:28 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:06:58.934 15:05:28 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:06:58.934 15:05:28 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:06:58.934 15:05:28 -- dd/sparse.sh@118 -- # prepare 00:06:58.934 15:05:28 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:06:58.934 15:05:28 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:06:58.934 1+0 records in 00:06:58.934 1+0 records out 00:06:58.934 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00651345 s, 644 MB/s 00:06:58.934 15:05:28 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:06:58.934 1+0 records in 00:06:58.934 1+0 records out 00:06:58.934 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00671441 s, 625 MB/s 00:06:58.934 15:05:28 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:06:58.934 1+0 records in 00:06:58.934 1+0 records out 00:06:58.934 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00608849 s, 689 MB/s 00:06:58.934 15:05:28 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:06:58.934 15:05:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:58.934 15:05:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.934 15:05:28 -- common/autotest_common.sh@10 -- # set +x 00:06:58.934 ************************************ 00:06:58.934 START TEST dd_sparse_file_to_file 00:06:58.934 
************************************ 00:06:58.934 15:05:28 -- common/autotest_common.sh@1114 -- # file_to_file 00:06:58.934 15:05:28 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:06:58.934 15:05:28 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:06:58.934 15:05:28 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:58.934 15:05:28 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:06:58.934 15:05:28 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:06:58.934 15:05:28 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:06:58.934 15:05:28 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:06:58.934 15:05:28 -- dd/sparse.sh@41 -- # gen_conf 00:06:58.934 15:05:28 -- dd/common.sh@31 -- # xtrace_disable 00:06:58.934 15:05:28 -- common/autotest_common.sh@10 -- # set +x 00:06:58.934 [2024-11-06 15:05:28.099814] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:58.934 [2024-11-06 15:05:28.100068] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59358 ] 00:06:58.934 { 00:06:58.934 "subsystems": [ 00:06:58.934 { 00:06:58.934 "subsystem": "bdev", 00:06:58.934 "config": [ 00:06:58.934 { 00:06:58.934 "params": { 00:06:58.934 "block_size": 4096, 00:06:58.934 "filename": "dd_sparse_aio_disk", 00:06:58.934 "name": "dd_aio" 00:06:58.934 }, 00:06:58.934 "method": "bdev_aio_create" 00:06:58.934 }, 00:06:58.934 { 00:06:58.934 "params": { 00:06:58.934 "lvs_name": "dd_lvstore", 00:06:58.934 "bdev_name": "dd_aio" 00:06:58.934 }, 00:06:58.934 "method": "bdev_lvol_create_lvstore" 00:06:58.934 }, 00:06:58.934 { 00:06:58.934 "method": "bdev_wait_for_examine" 00:06:58.934 } 00:06:58.934 ] 00:06:58.934 } 00:06:58.934 ] 00:06:58.934 } 00:06:59.193 [2024-11-06 15:05:28.237539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.193 [2024-11-06 15:05:28.290820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.193  [2024-11-06T15:05:28.727Z] Copying: 12/36 [MB] (average 1333 MBps) 00:06:59.452 00:06:59.452 15:05:28 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:06:59.452 15:05:28 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:06:59.452 15:05:28 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:06:59.452 15:05:28 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:06:59.452 15:05:28 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:59.452 15:05:28 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:06:59.452 15:05:28 -- dd/sparse.sh@52 -- # stat1_b=24576 00:06:59.452 15:05:28 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:06:59.452 15:05:28 -- dd/sparse.sh@53 -- # stat2_b=24576 00:06:59.452 15:05:28 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:59.452 00:06:59.452 real 0m0.565s 00:06:59.452 user 0m0.339s 00:06:59.452 sys 0m0.129s 00:06:59.452 15:05:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.452 15:05:28 -- common/autotest_common.sh@10 -- # set +x 00:06:59.452 ************************************ 00:06:59.452 END TEST dd_sparse_file_to_file 00:06:59.452 ************************************ 00:06:59.452 15:05:28 -- dd/sparse.sh@121 -- # 
run_test dd_sparse_file_to_bdev file_to_bdev 00:06:59.452 15:05:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:59.452 15:05:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.452 15:05:28 -- common/autotest_common.sh@10 -- # set +x 00:06:59.452 ************************************ 00:06:59.452 START TEST dd_sparse_file_to_bdev 00:06:59.452 ************************************ 00:06:59.452 15:05:28 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:06:59.452 15:05:28 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:59.452 15:05:28 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:06:59.452 15:05:28 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:06:59.452 15:05:28 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:06:59.452 15:05:28 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:06:59.452 15:05:28 -- dd/sparse.sh@73 -- # gen_conf 00:06:59.452 15:05:28 -- dd/common.sh@31 -- # xtrace_disable 00:06:59.452 15:05:28 -- common/autotest_common.sh@10 -- # set +x 00:06:59.452 [2024-11-06 15:05:28.719231] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.452 [2024-11-06 15:05:28.719315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59404 ] 00:06:59.711 { 00:06:59.711 "subsystems": [ 00:06:59.711 { 00:06:59.711 "subsystem": "bdev", 00:06:59.711 "config": [ 00:06:59.711 { 00:06:59.711 "params": { 00:06:59.711 "block_size": 4096, 00:06:59.711 "filename": "dd_sparse_aio_disk", 00:06:59.711 "name": "dd_aio" 00:06:59.711 }, 00:06:59.711 "method": "bdev_aio_create" 00:06:59.711 }, 00:06:59.711 { 00:06:59.711 "params": { 00:06:59.711 "lvs_name": "dd_lvstore", 00:06:59.711 "lvol_name": "dd_lvol", 00:06:59.711 "size": 37748736, 00:06:59.711 "thin_provision": true 00:06:59.711 }, 00:06:59.711 "method": "bdev_lvol_create" 00:06:59.711 }, 00:06:59.711 { 00:06:59.711 "method": "bdev_wait_for_examine" 00:06:59.711 } 00:06:59.711 ] 00:06:59.711 } 00:06:59.711 ] 00:06:59.711 } 00:06:59.712 [2024-11-06 15:05:28.856620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.712 [2024-11-06 15:05:28.904570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.712 [2024-11-06 15:05:28.960708] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:06:59.970  [2024-11-06T15:05:29.245Z] Copying: 12/36 [MB] (average 352 MBps)[2024-11-06 15:05:29.010075] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:06:59.970 00:06:59.970 00:06:59.970 ************************************ 00:06:59.970 END TEST dd_sparse_file_to_bdev 00:06:59.970 ************************************ 00:06:59.970 00:06:59.970 real 0m0.546s 00:06:59.970 user 0m0.361s 00:06:59.970 sys 0m0.113s 00:06:59.970 15:05:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.970 15:05:29 -- common/autotest_common.sh@10 -- # set +x 
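A minimal sketch of what the dd_sparse_* runs above boil down to, assuming only what the preceding records show (the dd_sparse_aio_disk backing file, file_zero1/file_zero2, the 12582912-byte block size, and the JSON bdev config passed over /dev/fd/62); the standalone-script framing below is illustrative and is not the actual dd/sparse.sh test:

# Minimal sketch, assuming the names and sizes visible in the records above;
# not the actual dd/sparse.sh test script.
set -e

# Prepare: a 100 MiB AIO backing file plus a sparse input with three 4 MiB
# data extents at offsets 0, 16 and 32 MiB (the truncate + "dd ... seek=4/8" steps).
truncate dd_sparse_aio_disk --size 104857600
dd if=/dev/zero of=file_zero1 bs=4M count=1
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8

# Bdev config handed to spdk_dd (the log feeds it through a file descriptor):
# an AIO bdev on the backing file and a logical-volume store on top of it.
cat > dd_sparse.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_aio_create",
          "params": { "filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096 } },
        { "method": "bdev_lvol_create_lvstore",
          "params": { "bdev_name": "dd_aio", "lvs_name": "dd_lvstore" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# file_to_file: --sparse tells spdk_dd to skip holes in the input.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json dd_sparse.json

# Verification used by the tests: the apparent size (%s) of source and copy must
# match, while the allocated block count (%b) stays at the ~12 MiB actually
# written (the 37748736 / 24576 values in the log), proving holes were preserved.
stat --printf='%s %b\n' file_zero1 file_zero2

The file_to_bdev and bdev_to_file stages in the surrounding records reuse the same AIO/lvstore setup but address the copy target as a bdev instead of a file, passing --ob=dd_lvstore/dd_lvol (and later --ib=dd_lvstore/dd_lvol) with the thin-provisioned lvol declared through bdev_lvol_create, as shown in their JSON configs above and below.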
00:07:00.229 15:05:29 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:00.229 15:05:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:00.229 15:05:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.229 15:05:29 -- common/autotest_common.sh@10 -- # set +x 00:07:00.229 ************************************ 00:07:00.229 START TEST dd_sparse_bdev_to_file 00:07:00.229 ************************************ 00:07:00.229 15:05:29 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:07:00.229 15:05:29 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:00.229 15:05:29 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:00.229 15:05:29 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:00.229 15:05:29 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:00.229 15:05:29 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:00.229 15:05:29 -- dd/sparse.sh@91 -- # gen_conf 00:07:00.229 15:05:29 -- dd/common.sh@31 -- # xtrace_disable 00:07:00.229 15:05:29 -- common/autotest_common.sh@10 -- # set +x 00:07:00.229 [2024-11-06 15:05:29.322290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:00.229 [2024-11-06 15:05:29.322395] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59430 ] 00:07:00.229 { 00:07:00.229 "subsystems": [ 00:07:00.229 { 00:07:00.229 "subsystem": "bdev", 00:07:00.229 "config": [ 00:07:00.229 { 00:07:00.229 "params": { 00:07:00.229 "block_size": 4096, 00:07:00.229 "filename": "dd_sparse_aio_disk", 00:07:00.229 "name": "dd_aio" 00:07:00.229 }, 00:07:00.229 "method": "bdev_aio_create" 00:07:00.229 }, 00:07:00.229 { 00:07:00.229 "method": "bdev_wait_for_examine" 00:07:00.229 } 00:07:00.229 ] 00:07:00.229 } 00:07:00.229 ] 00:07:00.229 } 00:07:00.229 [2024-11-06 15:05:29.457862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.488 [2024-11-06 15:05:29.506436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.489  [2024-11-06T15:05:30.022Z] Copying: 12/36 [MB] (average 1333 MBps) 00:07:00.747 00:07:00.747 15:05:29 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:00.747 15:05:29 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:00.747 15:05:29 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:00.747 15:05:29 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:00.747 15:05:29 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:00.747 15:05:29 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:00.747 15:05:29 -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:00.747 15:05:29 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:00.748 15:05:29 -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:00.748 15:05:29 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:00.748 00:07:00.748 real 0m0.541s 00:07:00.748 user 0m0.329s 00:07:00.748 sys 0m0.131s 00:07:00.748 15:05:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.748 15:05:29 -- common/autotest_common.sh@10 -- # set +x 00:07:00.748 ************************************ 00:07:00.748 END TEST dd_sparse_bdev_to_file 00:07:00.748 ************************************ 00:07:00.748 15:05:29 -- 
dd/sparse.sh@1 -- # cleanup 00:07:00.748 15:05:29 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:00.748 15:05:29 -- dd/sparse.sh@12 -- # rm file_zero1 00:07:00.748 15:05:29 -- dd/sparse.sh@13 -- # rm file_zero2 00:07:00.748 15:05:29 -- dd/sparse.sh@14 -- # rm file_zero3 00:07:00.748 00:07:00.748 real 0m2.117s 00:07:00.748 user 0m1.268s 00:07:00.748 sys 0m0.585s 00:07:00.748 15:05:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.748 ************************************ 00:07:00.748 END TEST spdk_dd_sparse 00:07:00.748 ************************************ 00:07:00.748 15:05:29 -- common/autotest_common.sh@10 -- # set +x 00:07:00.748 15:05:29 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:00.748 15:05:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:00.748 15:05:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.748 15:05:29 -- common/autotest_common.sh@10 -- # set +x 00:07:00.748 ************************************ 00:07:00.748 START TEST spdk_dd_negative 00:07:00.748 ************************************ 00:07:00.748 15:05:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:00.748 * Looking for test storage... 00:07:00.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:00.748 15:05:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:00.748 15:05:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:00.748 15:05:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:01.007 15:05:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:01.007 15:05:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:01.007 15:05:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:01.007 15:05:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:01.007 15:05:30 -- scripts/common.sh@335 -- # IFS=.-: 00:07:01.007 15:05:30 -- scripts/common.sh@335 -- # read -ra ver1 00:07:01.007 15:05:30 -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.007 15:05:30 -- scripts/common.sh@336 -- # read -ra ver2 00:07:01.007 15:05:30 -- scripts/common.sh@337 -- # local 'op=<' 00:07:01.007 15:05:30 -- scripts/common.sh@339 -- # ver1_l=2 00:07:01.007 15:05:30 -- scripts/common.sh@340 -- # ver2_l=1 00:07:01.007 15:05:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:01.007 15:05:30 -- scripts/common.sh@343 -- # case "$op" in 00:07:01.007 15:05:30 -- scripts/common.sh@344 -- # : 1 00:07:01.007 15:05:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:01.007 15:05:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.007 15:05:30 -- scripts/common.sh@364 -- # decimal 1 00:07:01.007 15:05:30 -- scripts/common.sh@352 -- # local d=1 00:07:01.007 15:05:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.007 15:05:30 -- scripts/common.sh@354 -- # echo 1 00:07:01.008 15:05:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:01.008 15:05:30 -- scripts/common.sh@365 -- # decimal 2 00:07:01.008 15:05:30 -- scripts/common.sh@352 -- # local d=2 00:07:01.008 15:05:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.008 15:05:30 -- scripts/common.sh@354 -- # echo 2 00:07:01.008 15:05:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:01.008 15:05:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:01.008 15:05:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:01.008 15:05:30 -- scripts/common.sh@367 -- # return 0 00:07:01.008 15:05:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.008 15:05:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:01.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.008 --rc genhtml_branch_coverage=1 00:07:01.008 --rc genhtml_function_coverage=1 00:07:01.008 --rc genhtml_legend=1 00:07:01.008 --rc geninfo_all_blocks=1 00:07:01.008 --rc geninfo_unexecuted_blocks=1 00:07:01.008 00:07:01.008 ' 00:07:01.008 15:05:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:01.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.008 --rc genhtml_branch_coverage=1 00:07:01.008 --rc genhtml_function_coverage=1 00:07:01.008 --rc genhtml_legend=1 00:07:01.008 --rc geninfo_all_blocks=1 00:07:01.008 --rc geninfo_unexecuted_blocks=1 00:07:01.008 00:07:01.008 ' 00:07:01.008 15:05:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:01.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.008 --rc genhtml_branch_coverage=1 00:07:01.008 --rc genhtml_function_coverage=1 00:07:01.008 --rc genhtml_legend=1 00:07:01.008 --rc geninfo_all_blocks=1 00:07:01.008 --rc geninfo_unexecuted_blocks=1 00:07:01.008 00:07:01.008 ' 00:07:01.008 15:05:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:01.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.008 --rc genhtml_branch_coverage=1 00:07:01.008 --rc genhtml_function_coverage=1 00:07:01.008 --rc genhtml_legend=1 00:07:01.008 --rc geninfo_all_blocks=1 00:07:01.008 --rc geninfo_unexecuted_blocks=1 00:07:01.008 00:07:01.008 ' 00:07:01.008 15:05:30 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:01.008 15:05:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.008 15:05:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.008 15:05:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.008 15:05:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.008 15:05:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.008 15:05:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.008 15:05:30 -- paths/export.sh@5 -- # export PATH 00:07:01.008 15:05:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.008 15:05:30 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.008 15:05:30 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.008 15:05:30 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.008 15:05:30 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.008 15:05:30 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:01.008 15:05:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.008 15:05:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.008 15:05:30 -- common/autotest_common.sh@10 -- # set +x 00:07:01.008 ************************************ 00:07:01.008 START TEST dd_invalid_arguments 00:07:01.008 ************************************ 00:07:01.008 15:05:30 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:07:01.008 15:05:30 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:01.008 15:05:30 -- common/autotest_common.sh@650 -- # local es=0 00:07:01.008 15:05:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:01.008 15:05:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.008 15:05:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.008 15:05:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.008 15:05:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.008 15:05:30 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.008 15:05:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.008 15:05:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.008 15:05:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.008 15:05:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:01.008 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:01.008 options: 00:07:01.008 -c, --config JSON config file (default none) 00:07:01.008 --json JSON config file (default none) 00:07:01.008 --json-ignore-init-errors 00:07:01.008 don't exit on invalid config entry 00:07:01.008 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:01.008 -g, --single-file-segments 00:07:01.008 force creating just one hugetlbfs file 00:07:01.008 -h, --help show this usage 00:07:01.008 -i, --shm-id shared memory ID (optional) 00:07:01.008 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:01.008 --lcores lcore to CPU mapping list. The list is in the format: 00:07:01.008 [<,lcores[@CPUs]>...] 00:07:01.008 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:01.008 Within the group, '-' is used for range separator, 00:07:01.008 ',' is used for single number separator. 00:07:01.008 '( )' can be omitted for single element group, 00:07:01.008 '@' can be omitted if cpus and lcores have the same value 00:07:01.008 -n, --mem-channels channel number of memory channels used for DPDK 00:07:01.008 -p, --main-core main (primary) core for DPDK 00:07:01.008 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:01.008 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:01.008 --disable-cpumask-locks Disable CPU core lock files. 00:07:01.008 --silence-noticelog disable notice level logging to stderr 00:07:01.008 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:01.008 -u, --no-pci disable PCI access 00:07:01.008 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:01.008 --max-delay maximum reactor delay (in microseconds) 00:07:01.008 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:01.008 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:01.008 -R, --huge-unlink unlink huge files after initialization 00:07:01.008 -v, --version print SPDK version 00:07:01.008 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:01.008 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:01.008 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:01.008 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:01.008 Tracepoints vary in size and can use more than one trace entry. 
00:07:01.008 --rpcs-allowed comma-separated list of permitted RPCS 00:07:01.008 --env-context Opaque context for use of the env implementation 00:07:01.008 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:01.008 --no-huge run without using hugepages 00:07:01.008 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, scsi, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, vfu_virtio, vfu_virtio_blk, vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:01.008 -e, --tpoint-group [:] 00:07:01.008 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:07:01.008 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:01.008 [2024-11-06 15:05:30.180010] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:07:01.009 enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:01.009 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:01.009 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:01.009 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:01.009 [--------- DD Options ---------] 00:07:01.009 --if Input file. Must specify either --if or --ib. 00:07:01.009 --ib Input bdev. Must specifier either --if or --ib 00:07:01.009 --of Output file. Must specify either --of or --ob. 00:07:01.009 --ob Output bdev. Must specify either --of or --ob. 00:07:01.009 --iflag Input file flags. 00:07:01.009 --oflag Output file flags. 00:07:01.009 --bs I/O unit size (default: 4096) 00:07:01.009 --qd Queue depth (default: 2) 00:07:01.009 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:01.009 --skip Skip this many I/O units at start of input. (default: 0) 00:07:01.009 --seek Skip this many I/O units at start of output. (default: 0) 00:07:01.009 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:07:01.009 --sparse Enable hole skipping in input target 00:07:01.009 Available iflag and oflag values: 00:07:01.009 append - append mode 00:07:01.009 direct - use direct I/O for data 00:07:01.009 directory - fail unless a directory 00:07:01.009 dsync - use synchronized I/O for data 00:07:01.009 noatime - do not update access time 00:07:01.009 noctty - do not assign controlling terminal from file 00:07:01.009 nofollow - do not follow symlinks 00:07:01.009 nonblock - use non-blocking I/O 00:07:01.009 sync - use synchronized I/O for data and metadata 00:07:01.009 15:05:30 -- common/autotest_common.sh@653 -- # es=2 00:07:01.009 15:05:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.009 15:05:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.009 15:05:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.009 00:07:01.009 real 0m0.071s 00:07:01.009 user 0m0.044s 00:07:01.009 sys 0m0.026s 00:07:01.009 15:05:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.009 ************************************ 00:07:01.009 END TEST dd_invalid_arguments 00:07:01.009 ************************************ 00:07:01.009 15:05:30 -- common/autotest_common.sh@10 -- # set +x 00:07:01.009 15:05:30 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:01.009 15:05:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.009 15:05:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.009 15:05:30 -- common/autotest_common.sh@10 -- # set +x 00:07:01.009 ************************************ 00:07:01.009 START TEST dd_double_input 00:07:01.009 ************************************ 00:07:01.009 15:05:30 -- common/autotest_common.sh@1114 -- # double_input 00:07:01.009 15:05:30 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:01.009 15:05:30 -- common/autotest_common.sh@650 -- # local es=0 00:07:01.009 15:05:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:01.009 15:05:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.009 15:05:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.009 15:05:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.009 15:05:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.009 15:05:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.009 15:05:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.009 15:05:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.009 15:05:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.009 15:05:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:01.268 [2024-11-06 15:05:30.302964] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:07:01.268 15:05:30 -- common/autotest_common.sh@653 -- # es=22 00:07:01.268 15:05:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.268 15:05:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.268 15:05:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.268 00:07:01.268 real 0m0.070s 00:07:01.268 user 0m0.046s 00:07:01.268 sys 0m0.024s 00:07:01.268 15:05:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.268 15:05:30 -- common/autotest_common.sh@10 -- # set +x 00:07:01.268 ************************************ 00:07:01.268 END TEST dd_double_input 00:07:01.268 ************************************ 00:07:01.268 15:05:30 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:01.268 15:05:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.268 15:05:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.268 15:05:30 -- common/autotest_common.sh@10 -- # set +x 00:07:01.268 ************************************ 00:07:01.268 START TEST dd_double_output 00:07:01.268 ************************************ 00:07:01.268 15:05:30 -- common/autotest_common.sh@1114 -- # double_output 00:07:01.268 15:05:30 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:01.268 15:05:30 -- common/autotest_common.sh@650 -- # local es=0 00:07:01.268 15:05:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:01.268 15:05:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.268 15:05:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.268 15:05:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.268 15:05:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.268 15:05:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.268 15:05:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.268 15:05:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.268 15:05:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.268 15:05:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:01.268 [2024-11-06 15:05:30.427453] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:07:01.268 15:05:30 -- common/autotest_common.sh@653 -- # es=22 00:07:01.268 15:05:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.268 15:05:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.268 15:05:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.268 00:07:01.268 real 0m0.074s 00:07:01.268 user 0m0.045s 00:07:01.268 sys 0m0.028s 00:07:01.268 ************************************ 00:07:01.268 END TEST dd_double_output 00:07:01.268 ************************************ 00:07:01.268 15:05:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.268 15:05:30 -- common/autotest_common.sh@10 -- # set +x 00:07:01.268 15:05:30 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:01.268 15:05:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.268 15:05:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.268 15:05:30 -- common/autotest_common.sh@10 -- # set +x 00:07:01.268 ************************************ 00:07:01.268 START TEST dd_no_input 00:07:01.268 ************************************ 00:07:01.268 15:05:30 -- common/autotest_common.sh@1114 -- # no_input 00:07:01.268 15:05:30 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:01.268 15:05:30 -- common/autotest_common.sh@650 -- # local es=0 00:07:01.268 15:05:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:01.268 15:05:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.268 15:05:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.268 15:05:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.268 15:05:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.268 15:05:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.268 15:05:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.268 15:05:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.268 15:05:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.268 15:05:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:01.528 [2024-11-06 15:05:30.545794] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:07:01.528 15:05:30 -- common/autotest_common.sh@653 -- # es=22 00:07:01.528 15:05:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.528 15:05:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.528 15:05:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.528 00:07:01.528 real 0m0.070s 00:07:01.528 user 0m0.044s 00:07:01.528 sys 0m0.025s 00:07:01.528 15:05:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.528 15:05:30 -- common/autotest_common.sh@10 -- # set +x 00:07:01.528 ************************************ 00:07:01.528 END TEST dd_no_input 00:07:01.528 ************************************ 00:07:01.528 15:05:30 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:01.528 15:05:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.528 15:05:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.528 15:05:30 -- common/autotest_common.sh@10 -- # set +x 00:07:01.528 ************************************ 
00:07:01.528 START TEST dd_no_output 00:07:01.528 ************************************ 00:07:01.528 15:05:30 -- common/autotest_common.sh@1114 -- # no_output 00:07:01.528 15:05:30 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.528 15:05:30 -- common/autotest_common.sh@650 -- # local es=0 00:07:01.528 15:05:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.528 15:05:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.528 15:05:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.528 15:05:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.528 15:05:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.528 15:05:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.528 15:05:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.528 15:05:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.528 15:05:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.528 15:05:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.528 [2024-11-06 15:05:30.655247] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:07:01.528 15:05:30 -- common/autotest_common.sh@653 -- # es=22 00:07:01.528 15:05:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.528 15:05:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.528 15:05:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.528 00:07:01.528 real 0m0.054s 00:07:01.528 user 0m0.032s 00:07:01.528 sys 0m0.022s 00:07:01.528 15:05:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.528 ************************************ 00:07:01.528 END TEST dd_no_output 00:07:01.528 ************************************ 00:07:01.528 15:05:30 -- common/autotest_common.sh@10 -- # set +x 00:07:01.528 15:05:30 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:01.528 15:05:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.528 15:05:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.528 15:05:30 -- common/autotest_common.sh@10 -- # set +x 00:07:01.528 ************************************ 00:07:01.528 START TEST dd_wrong_blocksize 00:07:01.528 ************************************ 00:07:01.528 15:05:30 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:07:01.528 15:05:30 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:01.528 15:05:30 -- common/autotest_common.sh@650 -- # local es=0 00:07:01.528 15:05:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:01.528 15:05:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.528 15:05:30 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:07:01.528 15:05:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.528 15:05:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.528 15:05:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.528 15:05:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.528 15:05:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.528 15:05:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.528 15:05:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:01.528 [2024-11-06 15:05:30.762892] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:07:01.528 15:05:30 -- common/autotest_common.sh@653 -- # es=22 00:07:01.528 15:05:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.528 15:05:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.528 15:05:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.528 00:07:01.528 real 0m0.056s 00:07:01.528 user 0m0.040s 00:07:01.528 sys 0m0.015s 00:07:01.528 15:05:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.528 ************************************ 00:07:01.528 END TEST dd_wrong_blocksize 00:07:01.528 ************************************ 00:07:01.528 15:05:30 -- common/autotest_common.sh@10 -- # set +x 00:07:01.787 15:05:30 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:01.787 15:05:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.787 15:05:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.787 15:05:30 -- common/autotest_common.sh@10 -- # set +x 00:07:01.787 ************************************ 00:07:01.787 START TEST dd_smaller_blocksize 00:07:01.787 ************************************ 00:07:01.787 15:05:30 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:07:01.787 15:05:30 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:01.787 15:05:30 -- common/autotest_common.sh@650 -- # local es=0 00:07:01.787 15:05:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:01.787 15:05:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.788 15:05:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.788 15:05:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.788 15:05:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.788 15:05:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.788 15:05:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.788 15:05:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.788 15:05:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:07:01.788 15:05:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:01.788 [2024-11-06 15:05:30.880635] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.788 [2024-11-06 15:05:30.880747] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59659 ] 00:07:01.788 [2024-11-06 15:05:31.019521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.047 [2024-11-06 15:05:31.086899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.306 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:02.306 [2024-11-06 15:05:31.392293] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:02.306 [2024-11-06 15:05:31.392365] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.306 [2024-11-06 15:05:31.462888] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:02.306 15:05:31 -- common/autotest_common.sh@653 -- # es=244 00:07:02.306 15:05:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.306 15:05:31 -- common/autotest_common.sh@662 -- # es=116 00:07:02.306 15:05:31 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:02.306 15:05:31 -- common/autotest_common.sh@670 -- # es=1 00:07:02.306 15:05:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.306 00:07:02.306 real 0m0.729s 00:07:02.306 user 0m0.343s 00:07:02.306 sys 0m0.280s 00:07:02.306 ************************************ 00:07:02.306 END TEST dd_smaller_blocksize 00:07:02.306 ************************************ 00:07:02.306 15:05:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.306 15:05:31 -- common/autotest_common.sh@10 -- # set +x 00:07:02.565 15:05:31 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:02.565 15:05:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:02.565 15:05:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.565 15:05:31 -- common/autotest_common.sh@10 -- # set +x 00:07:02.565 ************************************ 00:07:02.565 START TEST dd_invalid_count 00:07:02.565 ************************************ 00:07:02.565 15:05:31 -- common/autotest_common.sh@1114 -- # invalid_count 00:07:02.565 15:05:31 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:02.565 15:05:31 -- common/autotest_common.sh@650 -- # local es=0 00:07:02.565 15:05:31 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:02.565 15:05:31 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.565 15:05:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.565 15:05:31 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.565 15:05:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.565 15:05:31 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.565 15:05:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.565 15:05:31 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.565 15:05:31 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.565 15:05:31 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:02.565 [2024-11-06 15:05:31.669001] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:07:02.565 15:05:31 -- common/autotest_common.sh@653 -- # es=22 00:07:02.565 15:05:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.565 15:05:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.565 15:05:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.565 00:07:02.565 real 0m0.070s 00:07:02.565 user 0m0.051s 00:07:02.565 sys 0m0.018s 00:07:02.565 15:05:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.565 15:05:31 -- common/autotest_common.sh@10 -- # set +x 00:07:02.565 ************************************ 00:07:02.565 END TEST dd_invalid_count 00:07:02.565 ************************************ 00:07:02.565 15:05:31 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:02.565 15:05:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:02.565 15:05:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.565 15:05:31 -- common/autotest_common.sh@10 -- # set +x 00:07:02.565 ************************************ 00:07:02.565 START TEST dd_invalid_oflag 00:07:02.565 ************************************ 00:07:02.565 15:05:31 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:07:02.565 15:05:31 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:02.565 15:05:31 -- common/autotest_common.sh@650 -- # local es=0 00:07:02.565 15:05:31 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:02.565 15:05:31 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.565 15:05:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.565 15:05:31 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.565 15:05:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.565 15:05:31 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.565 15:05:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.565 15:05:31 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.565 15:05:31 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.565 15:05:31 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:02.565 [2024-11-06 15:05:31.789842] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:07:02.565 15:05:31 -- common/autotest_common.sh@653 -- # es=22 00:07:02.565 15:05:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.565 15:05:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.565 
15:05:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.565 00:07:02.565 real 0m0.070s 00:07:02.565 user 0m0.039s 00:07:02.565 sys 0m0.030s 00:07:02.565 15:05:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.565 15:05:31 -- common/autotest_common.sh@10 -- # set +x 00:07:02.565 ************************************ 00:07:02.565 END TEST dd_invalid_oflag 00:07:02.565 ************************************ 00:07:02.825 15:05:31 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:02.825 15:05:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:02.825 15:05:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.825 15:05:31 -- common/autotest_common.sh@10 -- # set +x 00:07:02.825 ************************************ 00:07:02.825 START TEST dd_invalid_iflag 00:07:02.825 ************************************ 00:07:02.825 15:05:31 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:07:02.825 15:05:31 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:02.825 15:05:31 -- common/autotest_common.sh@650 -- # local es=0 00:07:02.825 15:05:31 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:02.825 15:05:31 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.825 15:05:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.825 15:05:31 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.825 15:05:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.825 15:05:31 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.825 15:05:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.825 15:05:31 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.825 15:05:31 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.825 15:05:31 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:02.825 [2024-11-06 15:05:31.909478] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:07:02.825 15:05:31 -- common/autotest_common.sh@653 -- # es=22 00:07:02.825 15:05:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.825 15:05:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.825 15:05:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.825 ************************************ 00:07:02.825 00:07:02.825 real 0m0.066s 00:07:02.825 user 0m0.037s 00:07:02.825 sys 0m0.028s 00:07:02.825 15:05:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.825 15:05:31 -- common/autotest_common.sh@10 -- # set +x 00:07:02.825 END TEST dd_invalid_iflag 00:07:02.825 ************************************ 00:07:02.825 15:05:31 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:02.825 15:05:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:02.825 15:05:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.825 15:05:31 -- common/autotest_common.sh@10 -- # set +x 00:07:02.825 ************************************ 00:07:02.825 START TEST dd_unknown_flag 00:07:02.825 ************************************ 00:07:02.825 15:05:31 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:07:02.825 15:05:31 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:02.825 15:05:31 -- common/autotest_common.sh@650 -- # local es=0 00:07:02.825 15:05:31 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:02.825 15:05:31 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.825 15:05:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.825 15:05:31 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.825 15:05:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.825 15:05:31 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.825 15:05:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.825 15:05:31 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.825 15:05:31 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.825 15:05:31 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:02.825 [2024-11-06 15:05:32.024827] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:02.825 [2024-11-06 15:05:32.024923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59751 ] 00:07:03.084 [2024-11-06 15:05:32.155354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.084 [2024-11-06 15:05:32.207613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.084 [2024-11-06 15:05:32.251953] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:07:03.084 [2024-11-06 15:05:32.252041] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:07:03.084 [2024-11-06 15:05:32.252052] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:07:03.084 [2024-11-06 15:05:32.252062] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.084 [2024-11-06 15:05:32.309789] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:03.343 15:05:32 -- common/autotest_common.sh@653 -- # es=236 00:07:03.343 15:05:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.343 15:05:32 -- common/autotest_common.sh@662 -- # es=108 00:07:03.343 15:05:32 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:03.343 15:05:32 -- common/autotest_common.sh@670 -- # es=1 00:07:03.343 15:05:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.343 00:07:03.343 real 0m0.424s 00:07:03.343 user 0m0.228s 00:07:03.343 sys 0m0.093s 00:07:03.343 15:05:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.343 ************************************ 00:07:03.343 END TEST dd_unknown_flag 00:07:03.343 
************************************ 00:07:03.343 15:05:32 -- common/autotest_common.sh@10 -- # set +x 00:07:03.343 15:05:32 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:03.343 15:05:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:03.343 15:05:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.343 15:05:32 -- common/autotest_common.sh@10 -- # set +x 00:07:03.343 ************************************ 00:07:03.343 START TEST dd_invalid_json 00:07:03.343 ************************************ 00:07:03.343 15:05:32 -- common/autotest_common.sh@1114 -- # invalid_json 00:07:03.343 15:05:32 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:03.343 15:05:32 -- common/autotest_common.sh@650 -- # local es=0 00:07:03.343 15:05:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:03.343 15:05:32 -- dd/negative_dd.sh@95 -- # : 00:07:03.343 15:05:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.343 15:05:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.343 15:05:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.343 15:05:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.343 15:05:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.343 15:05:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.343 15:05:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.343 15:05:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:03.343 15:05:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:03.343 [2024-11-06 15:05:32.510606] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:03.343 [2024-11-06 15:05:32.510723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59781 ] 00:07:03.602 [2024-11-06 15:05:32.647522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.602 [2024-11-06 15:05:32.695356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.602 [2024-11-06 15:05:32.695542] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:07:03.602 [2024-11-06 15:05:32.695560] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.602 [2024-11-06 15:05:32.695598] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:03.602 15:05:32 -- common/autotest_common.sh@653 -- # es=234 00:07:03.602 15:05:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.602 15:05:32 -- common/autotest_common.sh@662 -- # es=106 00:07:03.602 15:05:32 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:03.602 15:05:32 -- common/autotest_common.sh@670 -- # es=1 00:07:03.602 15:05:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.602 00:07:03.602 real 0m0.323s 00:07:03.602 user 0m0.175s 00:07:03.602 sys 0m0.047s 00:07:03.602 15:05:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.602 15:05:32 -- common/autotest_common.sh@10 -- # set +x 00:07:03.602 ************************************ 00:07:03.602 END TEST dd_invalid_json 00:07:03.602 ************************************ 00:07:03.602 ************************************ 00:07:03.602 END TEST spdk_dd_negative 00:07:03.602 ************************************ 00:07:03.602 00:07:03.602 real 0m2.896s 00:07:03.602 user 0m1.414s 00:07:03.602 sys 0m1.106s 00:07:03.602 15:05:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.602 15:05:32 -- common/autotest_common.sh@10 -- # set +x 00:07:03.602 00:07:03.602 real 1m5.187s 00:07:03.602 user 0m40.715s 00:07:03.602 sys 0m15.349s 00:07:03.602 15:05:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.602 15:05:32 -- common/autotest_common.sh@10 -- # set +x 00:07:03.602 ************************************ 00:07:03.602 END TEST spdk_dd 00:07:03.602 ************************************ 00:07:03.861 15:05:32 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:03.861 15:05:32 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:03.861 15:05:32 -- spdk/autotest.sh@255 -- # timing_exit lib 00:07:03.861 15:05:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:03.861 15:05:32 -- common/autotest_common.sh@10 -- # set +x 00:07:03.861 15:05:32 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:03.861 15:05:32 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:03.861 15:05:32 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:03.861 15:05:32 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:03.861 15:05:32 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:07:03.861 15:05:32 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:07:03.861 15:05:32 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:03.861 15:05:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:03.861 15:05:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.861 15:05:32 -- common/autotest_common.sh@10 -- # set +x 00:07:03.861 ************************************ 00:07:03.861 START TEST 
nvmf_tcp 00:07:03.861 ************************************ 00:07:03.861 15:05:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:03.861 * Looking for test storage... 00:07:03.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:03.861 15:05:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:03.861 15:05:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:03.861 15:05:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:03.861 15:05:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:03.861 15:05:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:03.861 15:05:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:03.861 15:05:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:03.861 15:05:33 -- scripts/common.sh@335 -- # IFS=.-: 00:07:03.862 15:05:33 -- scripts/common.sh@335 -- # read -ra ver1 00:07:03.862 15:05:33 -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.862 15:05:33 -- scripts/common.sh@336 -- # read -ra ver2 00:07:03.862 15:05:33 -- scripts/common.sh@337 -- # local 'op=<' 00:07:03.862 15:05:33 -- scripts/common.sh@339 -- # ver1_l=2 00:07:03.862 15:05:33 -- scripts/common.sh@340 -- # ver2_l=1 00:07:03.862 15:05:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:03.862 15:05:33 -- scripts/common.sh@343 -- # case "$op" in 00:07:03.862 15:05:33 -- scripts/common.sh@344 -- # : 1 00:07:03.862 15:05:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:03.862 15:05:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.862 15:05:33 -- scripts/common.sh@364 -- # decimal 1 00:07:03.862 15:05:33 -- scripts/common.sh@352 -- # local d=1 00:07:03.862 15:05:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.862 15:05:33 -- scripts/common.sh@354 -- # echo 1 00:07:03.862 15:05:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:03.862 15:05:33 -- scripts/common.sh@365 -- # decimal 2 00:07:03.862 15:05:33 -- scripts/common.sh@352 -- # local d=2 00:07:03.862 15:05:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.862 15:05:33 -- scripts/common.sh@354 -- # echo 2 00:07:03.862 15:05:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:03.862 15:05:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:03.862 15:05:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:03.862 15:05:33 -- scripts/common.sh@367 -- # return 0 00:07:03.862 15:05:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.862 15:05:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:03.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.862 --rc genhtml_branch_coverage=1 00:07:03.862 --rc genhtml_function_coverage=1 00:07:03.862 --rc genhtml_legend=1 00:07:03.862 --rc geninfo_all_blocks=1 00:07:03.862 --rc geninfo_unexecuted_blocks=1 00:07:03.862 00:07:03.862 ' 00:07:03.862 15:05:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:03.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.862 --rc genhtml_branch_coverage=1 00:07:03.862 --rc genhtml_function_coverage=1 00:07:03.862 --rc genhtml_legend=1 00:07:03.862 --rc geninfo_all_blocks=1 00:07:03.862 --rc geninfo_unexecuted_blocks=1 00:07:03.862 00:07:03.862 ' 00:07:03.862 15:05:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:03.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.862 --rc 
genhtml_branch_coverage=1 00:07:03.862 --rc genhtml_function_coverage=1 00:07:03.862 --rc genhtml_legend=1 00:07:03.862 --rc geninfo_all_blocks=1 00:07:03.862 --rc geninfo_unexecuted_blocks=1 00:07:03.862 00:07:03.862 ' 00:07:04.122 15:05:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:04.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.122 --rc genhtml_branch_coverage=1 00:07:04.122 --rc genhtml_function_coverage=1 00:07:04.122 --rc genhtml_legend=1 00:07:04.122 --rc geninfo_all_blocks=1 00:07:04.122 --rc geninfo_unexecuted_blocks=1 00:07:04.122 00:07:04.122 ' 00:07:04.122 15:05:33 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:04.122 15:05:33 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:04.122 15:05:33 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:04.122 15:05:33 -- nvmf/common.sh@7 -- # uname -s 00:07:04.122 15:05:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.122 15:05:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.122 15:05:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.122 15:05:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.122 15:05:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.122 15:05:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.122 15:05:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.122 15:05:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.122 15:05:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.122 15:05:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.122 15:05:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:07:04.122 15:05:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:07:04.122 15:05:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.122 15:05:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.122 15:05:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:04.122 15:05:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:04.122 15:05:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.122 15:05:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.122 15:05:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.122 15:05:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.122 15:05:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.122 15:05:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.122 15:05:33 -- paths/export.sh@5 -- # export PATH 00:07:04.122 15:05:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.122 15:05:33 -- nvmf/common.sh@46 -- # : 0 00:07:04.122 15:05:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:04.122 15:05:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:04.122 15:05:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:04.122 15:05:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.122 15:05:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.122 15:05:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:04.122 15:05:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:04.122 15:05:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:04.122 15:05:33 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:04.122 15:05:33 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:04.122 15:05:33 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:04.122 15:05:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:04.122 15:05:33 -- common/autotest_common.sh@10 -- # set +x 00:07:04.122 15:05:33 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:07:04.122 15:05:33 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:04.122 15:05:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:04.122 15:05:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.122 15:05:33 -- common/autotest_common.sh@10 -- # set +x 00:07:04.122 ************************************ 00:07:04.122 START TEST nvmf_host_management 00:07:04.122 ************************************ 00:07:04.122 15:05:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:04.122 * Looking for test storage... 
00:07:04.122 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:04.122 15:05:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:04.122 15:05:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:04.122 15:05:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:04.122 15:05:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:04.122 15:05:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:04.122 15:05:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:04.122 15:05:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:04.122 15:05:33 -- scripts/common.sh@335 -- # IFS=.-: 00:07:04.122 15:05:33 -- scripts/common.sh@335 -- # read -ra ver1 00:07:04.122 15:05:33 -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.122 15:05:33 -- scripts/common.sh@336 -- # read -ra ver2 00:07:04.122 15:05:33 -- scripts/common.sh@337 -- # local 'op=<' 00:07:04.122 15:05:33 -- scripts/common.sh@339 -- # ver1_l=2 00:07:04.122 15:05:33 -- scripts/common.sh@340 -- # ver2_l=1 00:07:04.122 15:05:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:04.122 15:05:33 -- scripts/common.sh@343 -- # case "$op" in 00:07:04.122 15:05:33 -- scripts/common.sh@344 -- # : 1 00:07:04.122 15:05:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:04.122 15:05:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:04.122 15:05:33 -- scripts/common.sh@364 -- # decimal 1 00:07:04.122 15:05:33 -- scripts/common.sh@352 -- # local d=1 00:07:04.122 15:05:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.122 15:05:33 -- scripts/common.sh@354 -- # echo 1 00:07:04.122 15:05:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:04.122 15:05:33 -- scripts/common.sh@365 -- # decimal 2 00:07:04.122 15:05:33 -- scripts/common.sh@352 -- # local d=2 00:07:04.122 15:05:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.122 15:05:33 -- scripts/common.sh@354 -- # echo 2 00:07:04.122 15:05:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:04.122 15:05:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:04.122 15:05:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:04.122 15:05:33 -- scripts/common.sh@367 -- # return 0 00:07:04.122 15:05:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.122 15:05:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:04.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.122 --rc genhtml_branch_coverage=1 00:07:04.122 --rc genhtml_function_coverage=1 00:07:04.122 --rc genhtml_legend=1 00:07:04.122 --rc geninfo_all_blocks=1 00:07:04.122 --rc geninfo_unexecuted_blocks=1 00:07:04.122 00:07:04.122 ' 00:07:04.122 15:05:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:04.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.122 --rc genhtml_branch_coverage=1 00:07:04.122 --rc genhtml_function_coverage=1 00:07:04.122 --rc genhtml_legend=1 00:07:04.122 --rc geninfo_all_blocks=1 00:07:04.122 --rc geninfo_unexecuted_blocks=1 00:07:04.122 00:07:04.122 ' 00:07:04.122 15:05:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:04.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.122 --rc genhtml_branch_coverage=1 00:07:04.122 --rc genhtml_function_coverage=1 00:07:04.122 --rc genhtml_legend=1 00:07:04.122 --rc geninfo_all_blocks=1 00:07:04.122 --rc geninfo_unexecuted_blocks=1 00:07:04.122 00:07:04.122 ' 00:07:04.122 
15:05:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:04.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.122 --rc genhtml_branch_coverage=1 00:07:04.123 --rc genhtml_function_coverage=1 00:07:04.123 --rc genhtml_legend=1 00:07:04.123 --rc geninfo_all_blocks=1 00:07:04.123 --rc geninfo_unexecuted_blocks=1 00:07:04.123 00:07:04.123 ' 00:07:04.123 15:05:33 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:04.123 15:05:33 -- nvmf/common.sh@7 -- # uname -s 00:07:04.123 15:05:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.123 15:05:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.123 15:05:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.123 15:05:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.123 15:05:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.123 15:05:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.123 15:05:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.123 15:05:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.123 15:05:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.123 15:05:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.123 15:05:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:07:04.123 15:05:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:07:04.123 15:05:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.123 15:05:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.123 15:05:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:04.123 15:05:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:04.123 15:05:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.123 15:05:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.123 15:05:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.123 15:05:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.123 15:05:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.123 15:05:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.123 15:05:33 -- paths/export.sh@5 -- # export PATH 00:07:04.123 15:05:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.123 15:05:33 -- nvmf/common.sh@46 -- # : 0 00:07:04.123 15:05:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:04.123 15:05:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:04.123 15:05:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:04.123 15:05:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.123 15:05:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.123 15:05:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:04.123 15:05:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:04.123 15:05:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:04.123 15:05:33 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:04.123 15:05:33 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:04.123 15:05:33 -- target/host_management.sh@104 -- # nvmftestinit 00:07:04.123 15:05:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:04.123 15:05:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:04.123 15:05:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:04.123 15:05:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:04.123 15:05:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:04.123 15:05:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.123 15:05:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:04.123 15:05:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.123 15:05:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:04.123 15:05:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:04.123 15:05:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:04.123 15:05:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:04.123 15:05:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:04.123 15:05:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:04.123 15:05:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.123 15:05:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.123 15:05:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:04.123 15:05:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:04.123 15:05:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:04.123 15:05:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:04.123 15:05:33 -- 
nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:04.123 15:05:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.123 15:05:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:04.123 15:05:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:04.123 15:05:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:04.123 15:05:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:04.123 15:05:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:04.382 Cannot find device "nvmf_init_br" 00:07:04.382 15:05:33 -- nvmf/common.sh@153 -- # true 00:07:04.382 15:05:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:04.382 Cannot find device "nvmf_tgt_br" 00:07:04.382 15:05:33 -- nvmf/common.sh@154 -- # true 00:07:04.382 15:05:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:04.382 Cannot find device "nvmf_tgt_br2" 00:07:04.382 15:05:33 -- nvmf/common.sh@155 -- # true 00:07:04.382 15:05:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:04.382 Cannot find device "nvmf_init_br" 00:07:04.382 15:05:33 -- nvmf/common.sh@156 -- # true 00:07:04.382 15:05:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:04.382 Cannot find device "nvmf_tgt_br" 00:07:04.382 15:05:33 -- nvmf/common.sh@157 -- # true 00:07:04.382 15:05:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:04.382 Cannot find device "nvmf_tgt_br2" 00:07:04.382 15:05:33 -- nvmf/common.sh@158 -- # true 00:07:04.382 15:05:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:04.382 Cannot find device "nvmf_br" 00:07:04.382 15:05:33 -- nvmf/common.sh@159 -- # true 00:07:04.382 15:05:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:04.382 Cannot find device "nvmf_init_if" 00:07:04.382 15:05:33 -- nvmf/common.sh@160 -- # true 00:07:04.382 15:05:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:04.382 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:04.382 15:05:33 -- nvmf/common.sh@161 -- # true 00:07:04.382 15:05:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:04.382 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:04.382 15:05:33 -- nvmf/common.sh@162 -- # true 00:07:04.382 15:05:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:04.382 15:05:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:04.382 15:05:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:04.383 15:05:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:04.383 15:05:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:04.383 15:05:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:04.383 15:05:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:04.383 15:05:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:04.383 15:05:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:04.383 15:05:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:04.383 15:05:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:04.383 15:05:33 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:04.383 15:05:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:04.383 15:05:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:04.383 15:05:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:04.383 15:05:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:04.383 15:05:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:04.642 15:05:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:04.642 15:05:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:04.642 15:05:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:04.642 15:05:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:04.642 15:05:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:04.642 15:05:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:04.642 15:05:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:04.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:04.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:07:04.642 00:07:04.642 --- 10.0.0.2 ping statistics --- 00:07:04.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.642 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:07:04.642 15:05:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:04.642 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:04.642 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:07:04.642 00:07:04.642 --- 10.0.0.3 ping statistics --- 00:07:04.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.642 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:07:04.642 15:05:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:04.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:04.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:04.642 00:07:04.642 --- 10.0.0.1 ping statistics --- 00:07:04.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.642 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:04.642 15:05:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.642 15:05:33 -- nvmf/common.sh@421 -- # return 0 00:07:04.642 15:05:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:04.642 15:05:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.642 15:05:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:04.642 15:05:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:04.642 15:05:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.642 15:05:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:04.642 15:05:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:04.642 15:05:33 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:07:04.642 15:05:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:04.642 15:05:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.642 15:05:33 -- common/autotest_common.sh@10 -- # set +x 00:07:04.642 ************************************ 00:07:04.642 START TEST nvmf_host_management 00:07:04.642 ************************************ 00:07:04.642 15:05:33 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:07:04.642 15:05:33 -- target/host_management.sh@69 -- # starttarget 00:07:04.642 15:05:33 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:04.642 15:05:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:04.642 15:05:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:04.642 15:05:33 -- common/autotest_common.sh@10 -- # set +x 00:07:04.642 15:05:33 -- nvmf/common.sh@469 -- # nvmfpid=60057 00:07:04.642 15:05:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:04.642 15:05:33 -- nvmf/common.sh@470 -- # waitforlisten 60057 00:07:04.642 15:05:33 -- common/autotest_common.sh@829 -- # '[' -z 60057 ']' 00:07:04.642 15:05:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.642 15:05:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.642 15:05:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.642 15:05:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.642 15:05:33 -- common/autotest_common.sh@10 -- # set +x 00:07:04.642 [2024-11-06 15:05:33.862547] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:04.642 [2024-11-06 15:05:33.863155] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.901 [2024-11-06 15:05:34.003569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.901 [2024-11-06 15:05:34.074251] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:04.901 [2024-11-06 15:05:34.074424] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
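The nvmf_veth_init block above (test/nvmf/common.sh) builds the virtual topology the rest of the run depends on: a network namespace nvmf_tgt_ns_spdk that will host the SPDK target, veth pairs for the initiator and target sides, and a bridge nvmf_br joining them, all verified with pings before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of the same setup, keeping the script's interface names and 10.0.0.0/24 addressing but dropping the second target interface (nvmf_tgt_if2 / 10.0.0.3) for brevity; this is an illustration, not the actual common.sh code:

# namespace for the target plus veth pairs for the initiator and target sides
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# addresses: initiator 10.0.0.1 on the host, target 10.0.0.2 inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
# bring the links up on both sides
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the two host-side veth ends together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# let NVMe/TCP traffic reach port 4420 and verify connectivity
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2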
00:07:04.901 [2024-11-06 15:05:34.074441] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:04.901 [2024-11-06 15:05:34.074452] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:04.901 [2024-11-06 15:05:34.074615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.901 [2024-11-06 15:05:34.074752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.901 [2024-11-06 15:05:34.074796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:04.901 [2024-11-06 15:05:34.074800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.838 15:05:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.838 15:05:34 -- common/autotest_common.sh@862 -- # return 0 00:07:05.838 15:05:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:05.838 15:05:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:05.838 15:05:34 -- common/autotest_common.sh@10 -- # set +x 00:07:05.838 15:05:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:05.838 15:05:34 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:05.838 15:05:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.838 15:05:34 -- common/autotest_common.sh@10 -- # set +x 00:07:05.838 [2024-11-06 15:05:34.946207] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.838 15:05:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.838 15:05:34 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:05.838 15:05:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:05.838 15:05:34 -- common/autotest_common.sh@10 -- # set +x 00:07:05.838 15:05:34 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:05.838 15:05:34 -- target/host_management.sh@23 -- # cat 00:07:05.838 15:05:34 -- target/host_management.sh@30 -- # rpc_cmd 00:07:05.838 15:05:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.838 15:05:34 -- common/autotest_common.sh@10 -- # set +x 00:07:05.838 Malloc0 00:07:05.838 [2024-11-06 15:05:35.015993] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.838 15:05:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.838 15:05:35 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:05.838 15:05:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:05.838 15:05:35 -- common/autotest_common.sh@10 -- # set +x 00:07:05.838 15:05:35 -- target/host_management.sh@73 -- # perfpid=60111 00:07:05.838 15:05:35 -- target/host_management.sh@74 -- # waitforlisten 60111 /var/tmp/bdevperf.sock 00:07:05.838 15:05:35 -- common/autotest_common.sh@829 -- # '[' -z 60111 ']' 00:07:05.838 15:05:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:05.838 15:05:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.838 15:05:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:05.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:07:05.838 15:05:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.838 15:05:35 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:05.838 15:05:35 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:05.838 15:05:35 -- common/autotest_common.sh@10 -- # set +x 00:07:05.838 15:05:35 -- nvmf/common.sh@520 -- # config=() 00:07:05.838 15:05:35 -- nvmf/common.sh@520 -- # local subsystem config 00:07:05.838 15:05:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:07:05.838 15:05:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:07:05.838 { 00:07:05.838 "params": { 00:07:05.838 "name": "Nvme$subsystem", 00:07:05.838 "trtype": "$TEST_TRANSPORT", 00:07:05.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:05.838 "adrfam": "ipv4", 00:07:05.838 "trsvcid": "$NVMF_PORT", 00:07:05.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:05.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:05.839 "hdgst": ${hdgst:-false}, 00:07:05.839 "ddgst": ${ddgst:-false} 00:07:05.839 }, 00:07:05.839 "method": "bdev_nvme_attach_controller" 00:07:05.839 } 00:07:05.839 EOF 00:07:05.839 )") 00:07:05.839 15:05:35 -- nvmf/common.sh@542 -- # cat 00:07:05.839 15:05:35 -- nvmf/common.sh@544 -- # jq . 00:07:05.839 15:05:35 -- nvmf/common.sh@545 -- # IFS=, 00:07:05.839 15:05:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:07:05.839 "params": { 00:07:05.839 "name": "Nvme0", 00:07:05.839 "trtype": "tcp", 00:07:05.839 "traddr": "10.0.0.2", 00:07:05.839 "adrfam": "ipv4", 00:07:05.839 "trsvcid": "4420", 00:07:05.839 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:05.839 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:05.839 "hdgst": false, 00:07:05.839 "ddgst": false 00:07:05.839 }, 00:07:05.839 "method": "bdev_nvme_attach_controller" 00:07:05.839 }' 00:07:06.098 [2024-11-06 15:05:35.122620] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:06.098 [2024-11-06 15:05:35.122740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60111 ] 00:07:06.098 [2024-11-06 15:05:35.259734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.098 [2024-11-06 15:05:35.326625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.357 Running I/O for 10 seconds... 
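At this point the target is listening on 10.0.0.2:4420 and the I/O load comes from a separate bdevperf process: gen_nvmf_target_json prints the bdev_nvme_attach_controller entry echoed above, bdevperf consumes it through the /dev/fd/63 process substitution, and then runs a queue-depth-64, 64 KiB verify workload for 10 seconds. An equivalent standalone invocation, sketched with a temporary file instead of the process substitution; the params block is copied from the log, while the surrounding "subsystems"/"bdev" envelope is the usual SPDK JSON-config layout assumed here rather than shown in the output:

# hypothetical temp file; the test feeds the same JSON via /dev/fd/63 instead
cat > /tmp/bdevperf_nvmf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same knobs as the logged run: 64 outstanding I/Os, 64 KiB blocks, verify pattern, 10 s
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvmf.json -q 64 -o 65536 -w verify -t 10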
00:07:06.925 15:05:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.925 15:05:36 -- common/autotest_common.sh@862 -- # return 0 00:07:06.925 15:05:36 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:06.925 15:05:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.925 15:05:36 -- common/autotest_common.sh@10 -- # set +x 00:07:06.925 15:05:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.925 15:05:36 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:06.925 15:05:36 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:06.925 15:05:36 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:06.925 15:05:36 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:06.925 15:05:36 -- target/host_management.sh@52 -- # local ret=1 00:07:06.925 15:05:36 -- target/host_management.sh@53 -- # local i 00:07:06.925 15:05:36 -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:06.925 15:05:36 -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:06.925 15:05:36 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:06.925 15:05:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.925 15:05:36 -- common/autotest_common.sh@10 -- # set +x 00:07:06.925 15:05:36 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:06.925 15:05:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.186 15:05:36 -- target/host_management.sh@55 -- # read_io_count=2088 00:07:07.186 15:05:36 -- target/host_management.sh@58 -- # '[' 2088 -ge 100 ']' 00:07:07.186 15:05:36 -- target/host_management.sh@59 -- # ret=0 00:07:07.186 15:05:36 -- target/host_management.sh@60 -- # break 00:07:07.186 15:05:36 -- target/host_management.sh@64 -- # return 0 00:07:07.186 15:05:36 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:07.186 15:05:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.186 15:05:36 -- common/autotest_common.sh@10 -- # set +x 00:07:07.186 [2024-11-06 15:05:36.217131] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217191] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217202] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217211] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217219] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217227] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217235] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217243] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the 
state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217251] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217259] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217267] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217275] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217283] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217291] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217299] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217307] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217315] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217323] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217331] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217339] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217347] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217354] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217362] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217370] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217378] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217386] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217394] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217401] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217410] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217428] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217440] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217448] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217457] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217465] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217484] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217494] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcad00 is same with the state(5) to be set 00:07:07.186 [2024-11-06 15:05:36.217599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.186 [2024-11-06 15:05:36.217630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.186 [2024-11-06 15:05:36.217651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.186 [2024-11-06 15:05:36.217704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.186 [2024-11-06 15:05:36.217718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.217727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.217738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.217747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.217759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.217788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.217801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.217810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.217821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.217830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 
[2024-11-06 15:05:36.217842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.217851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.217862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.217875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.217887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.217896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.217907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.217921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.217932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.217942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.217953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.217962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.217973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.217982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.217995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 
15:05:36.218071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218279] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.187 [2024-11-06 15:05:36.218545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.187 [2024-11-06 15:05:36.218553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.218564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.218573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.218584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.218593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.218603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.218612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.218622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.218631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.218641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.218650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.218660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.218686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.218697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.218718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.218730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.218739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.218750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.218759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.218770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.218779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.218790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.218799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.218810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.218819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.218831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.218841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.218852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.218861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.218872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.218881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.218892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.218910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.218922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.218934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.218945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.218955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.218966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.218975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.218986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.218995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.219007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.219016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.219027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.188 [2024-11-06 15:05:36.219051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:07.188 [2024-11-06 15:05:36.219142] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x604400 was disconnected and freed. reset controller. 
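The long run of *NOTICE* lines above is the queue drain that accompanies the controller reset: every I/O still outstanding on qid:1 is completed with ABORTED - SQ DELETION before qpair 0x604400 is disconnected and freed. A minimal sketch for summarizing such a burst offline, assuming the console output has been saved to a file (the log path below is illustrative, not part of the run):

#!/usr/bin/env bash
# Count aborted commands per opcode (READ/WRITE) in a saved console log.
# Relies only on the message format shown above, e.g.
#   nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27008 len:128 ...
LOG=${1:-console.log}   # illustrative path to the captured output

grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' "$LOG" |
    awk '{count[$NF]++} END {for (op in count) printf "%-6s aborted: %d\n", op, count[op]}'
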
00:07:07.188 [2024-11-06 15:05:36.220288] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:07.188 15:05:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.188 task offset: 24192 on job bdev=Nvme0n1 fails 00:07:07.188 00:07:07.188 Latency(us) 00:07:07.188 [2024-11-06T15:05:36.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:07.188 [2024-11-06T15:05:36.463Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:07.188 [2024-11-06T15:05:36.463Z] Job: Nvme0n1 ended in about 0.76 seconds with error 00:07:07.188 Verification LBA range: start 0x0 length 0x400 00:07:07.188 Nvme0n1 : 0.76 2912.78 182.05 84.09 0.00 21043.41 6166.34 27405.96 00:07:07.188 [2024-11-06T15:05:36.463Z] =================================================================================================================== 00:07:07.188 [2024-11-06T15:05:36.463Z] Total : 2912.78 182.05 84.09 0.00 21043.41 6166.34 27405.96 00:07:07.188 15:05:36 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:07.188 15:05:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.188 15:05:36 -- common/autotest_common.sh@10 -- # set +x 00:07:07.188 [2024-11-06 15:05:36.222335] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.188 [2024-11-06 15:05:36.222360] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62a150 (9): Bad file descriptor 00:07:07.188 [2024-11-06 15:05:36.228392] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:07.188 15:05:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.188 15:05:36 -- target/host_management.sh@87 -- # sleep 1 00:07:08.125 15:05:37 -- target/host_management.sh@91 -- # kill -9 60111 00:07:08.125 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (60111) - No such process 00:07:08.125 15:05:37 -- target/host_management.sh@91 -- # true 00:07:08.125 15:05:37 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:08.125 15:05:37 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:08.125 15:05:37 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:08.125 15:05:37 -- nvmf/common.sh@520 -- # config=() 00:07:08.125 15:05:37 -- nvmf/common.sh@520 -- # local subsystem config 00:07:08.125 15:05:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:07:08.125 15:05:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:07:08.125 { 00:07:08.125 "params": { 00:07:08.125 "name": "Nvme$subsystem", 00:07:08.125 "trtype": "$TEST_TRANSPORT", 00:07:08.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:08.125 "adrfam": "ipv4", 00:07:08.125 "trsvcid": "$NVMF_PORT", 00:07:08.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:08.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:08.125 "hdgst": ${hdgst:-false}, 00:07:08.125 "ddgst": ${ddgst:-false} 00:07:08.125 }, 00:07:08.125 "method": "bdev_nvme_attach_controller" 00:07:08.125 } 00:07:08.125 EOF 00:07:08.125 )") 00:07:08.125 15:05:37 -- nvmf/common.sh@542 -- # cat 00:07:08.125 15:05:37 -- nvmf/common.sh@544 -- # jq . 
00:07:08.125 15:05:37 -- nvmf/common.sh@545 -- # IFS=, 00:07:08.125 15:05:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:07:08.125 "params": { 00:07:08.125 "name": "Nvme0", 00:07:08.125 "trtype": "tcp", 00:07:08.125 "traddr": "10.0.0.2", 00:07:08.125 "adrfam": "ipv4", 00:07:08.125 "trsvcid": "4420", 00:07:08.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:08.125 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:08.125 "hdgst": false, 00:07:08.125 "ddgst": false 00:07:08.125 }, 00:07:08.125 "method": "bdev_nvme_attach_controller" 00:07:08.125 }' 00:07:08.125 [2024-11-06 15:05:37.290754] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:08.126 [2024-11-06 15:05:37.290850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60150 ] 00:07:08.384 [2024-11-06 15:05:37.429467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.384 [2024-11-06 15:05:37.483275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.384 Running I/O for 1 seconds... 00:07:09.826 00:07:09.826 Latency(us) 00:07:09.826 [2024-11-06T15:05:39.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:09.826 [2024-11-06T15:05:39.101Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:09.826 Verification LBA range: start 0x0 length 0x400 00:07:09.826 Nvme0n1 : 1.01 2885.17 180.32 0.00 0.00 21847.92 2070.34 28478.37 00:07:09.826 [2024-11-06T15:05:39.101Z] =================================================================================================================== 00:07:09.826 [2024-11-06T15:05:39.101Z] Total : 2885.17 180.32 0.00 0.00 21847.92 2070.34 28478.37 00:07:09.826 15:05:38 -- target/host_management.sh@101 -- # stoptarget 00:07:09.826 15:05:38 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:09.826 15:05:38 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:09.826 15:05:38 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:09.826 15:05:38 -- target/host_management.sh@40 -- # nvmftestfini 00:07:09.826 15:05:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:09.826 15:05:38 -- nvmf/common.sh@116 -- # sync 00:07:09.826 15:05:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:09.826 15:05:38 -- nvmf/common.sh@119 -- # set +e 00:07:09.826 15:05:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:09.826 15:05:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:09.826 rmmod nvme_tcp 00:07:09.826 rmmod nvme_fabrics 00:07:09.826 rmmod nvme_keyring 00:07:09.826 15:05:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:09.826 15:05:38 -- nvmf/common.sh@123 -- # set -e 00:07:09.826 15:05:38 -- nvmf/common.sh@124 -- # return 0 00:07:09.826 15:05:38 -- nvmf/common.sh@477 -- # '[' -n 60057 ']' 00:07:09.826 15:05:38 -- nvmf/common.sh@478 -- # killprocess 60057 00:07:09.826 15:05:38 -- common/autotest_common.sh@936 -- # '[' -z 60057 ']' 00:07:09.826 15:05:38 -- common/autotest_common.sh@940 -- # kill -0 60057 00:07:09.826 15:05:38 -- common/autotest_common.sh@941 -- # uname 00:07:09.826 15:05:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:09.826 15:05:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60057 00:07:09.826 
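The relaunch above is driven by gen_nvmf_target_json: one bdev_nvme_attach_controller fragment is built per subsystem with a heredoc, the fragments are joined with IFS=',' and pretty-printed by jq, and the result is handed to bdevperf through a file descriptor (--json /dev/fd/62). A condensed sketch of that pattern follows; the SPDK path and the bdevperf flags are copied from the trace, while the envelope the full helper wraps around the fragments is not visible in this excerpt and is omitted here, so treat this as the shape of the call rather than a drop-in replacement.

#!/usr/bin/env bash
SPDK_DIR=/home/vagrant/spdk_repo/spdk    # path as seen in the trace

# Build one attach-controller fragment per subsystem id, join with commas,
# and pretty-print -- mirroring the IFS=, / printf / jq sequence above.
gen_json() {
    local subsystem config=()
    for subsystem in "$@"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    (IFS=,; printf '%s\n' "${config[*]}") | jq .
}

# 64-deep, 64 KiB verify workload for 1 second; flags copied from the trace.
"$SPDK_DIR/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_json 0) -q 64 -o 65536 -w verify -t 1
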
15:05:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:07:09.826 killing process with pid 60057 00:07:09.826 15:05:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:07:09.826 15:05:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60057' 00:07:09.826 15:05:39 -- common/autotest_common.sh@955 -- # kill 60057 00:07:09.826 15:05:39 -- common/autotest_common.sh@960 -- # wait 60057 00:07:10.095 [2024-11-06 15:05:39.166108] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:10.095 15:05:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:10.095 15:05:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:10.095 15:05:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:10.095 15:05:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:10.095 15:05:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:10.095 15:05:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.095 15:05:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:10.095 15:05:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.095 15:05:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:10.095 00:07:10.095 real 0m5.432s 00:07:10.095 user 0m23.013s 00:07:10.095 sys 0m1.191s 00:07:10.095 15:05:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.095 15:05:39 -- common/autotest_common.sh@10 -- # set +x 00:07:10.095 ************************************ 00:07:10.095 END TEST nvmf_host_management 00:07:10.095 ************************************ 00:07:10.095 15:05:39 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:07:10.095 ************************************ 00:07:10.095 END TEST nvmf_host_management 00:07:10.095 ************************************ 00:07:10.095 00:07:10.095 real 0m6.096s 00:07:10.095 user 0m23.240s 00:07:10.095 sys 0m1.433s 00:07:10.096 15:05:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.096 15:05:39 -- common/autotest_common.sh@10 -- # set +x 00:07:10.096 15:05:39 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:10.096 15:05:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:10.096 15:05:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.096 15:05:39 -- common/autotest_common.sh@10 -- # set +x 00:07:10.096 ************************************ 00:07:10.096 START TEST nvmf_lvol 00:07:10.096 ************************************ 00:07:10.096 15:05:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:10.355 * Looking for test storage... 
00:07:10.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:10.355 15:05:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:10.355 15:05:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:10.355 15:05:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:10.355 15:05:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:10.355 15:05:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:10.355 15:05:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:10.355 15:05:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:10.355 15:05:39 -- scripts/common.sh@335 -- # IFS=.-: 00:07:10.355 15:05:39 -- scripts/common.sh@335 -- # read -ra ver1 00:07:10.355 15:05:39 -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.355 15:05:39 -- scripts/common.sh@336 -- # read -ra ver2 00:07:10.355 15:05:39 -- scripts/common.sh@337 -- # local 'op=<' 00:07:10.355 15:05:39 -- scripts/common.sh@339 -- # ver1_l=2 00:07:10.355 15:05:39 -- scripts/common.sh@340 -- # ver2_l=1 00:07:10.356 15:05:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:10.356 15:05:39 -- scripts/common.sh@343 -- # case "$op" in 00:07:10.356 15:05:39 -- scripts/common.sh@344 -- # : 1 00:07:10.356 15:05:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:10.356 15:05:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:10.356 15:05:39 -- scripts/common.sh@364 -- # decimal 1 00:07:10.356 15:05:39 -- scripts/common.sh@352 -- # local d=1 00:07:10.356 15:05:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.356 15:05:39 -- scripts/common.sh@354 -- # echo 1 00:07:10.356 15:05:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:10.356 15:05:39 -- scripts/common.sh@365 -- # decimal 2 00:07:10.356 15:05:39 -- scripts/common.sh@352 -- # local d=2 00:07:10.356 15:05:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.356 15:05:39 -- scripts/common.sh@354 -- # echo 2 00:07:10.356 15:05:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:10.356 15:05:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:10.356 15:05:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:10.356 15:05:39 -- scripts/common.sh@367 -- # return 0 00:07:10.356 15:05:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.356 15:05:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:10.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.356 --rc genhtml_branch_coverage=1 00:07:10.356 --rc genhtml_function_coverage=1 00:07:10.356 --rc genhtml_legend=1 00:07:10.356 --rc geninfo_all_blocks=1 00:07:10.356 --rc geninfo_unexecuted_blocks=1 00:07:10.356 00:07:10.356 ' 00:07:10.356 15:05:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:10.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.356 --rc genhtml_branch_coverage=1 00:07:10.356 --rc genhtml_function_coverage=1 00:07:10.356 --rc genhtml_legend=1 00:07:10.356 --rc geninfo_all_blocks=1 00:07:10.356 --rc geninfo_unexecuted_blocks=1 00:07:10.356 00:07:10.356 ' 00:07:10.356 15:05:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:10.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.356 --rc genhtml_branch_coverage=1 00:07:10.356 --rc genhtml_function_coverage=1 00:07:10.356 --rc genhtml_legend=1 00:07:10.356 --rc geninfo_all_blocks=1 00:07:10.356 --rc geninfo_unexecuted_blocks=1 00:07:10.356 00:07:10.356 ' 00:07:10.356 
15:05:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:10.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.356 --rc genhtml_branch_coverage=1 00:07:10.356 --rc genhtml_function_coverage=1 00:07:10.356 --rc genhtml_legend=1 00:07:10.356 --rc geninfo_all_blocks=1 00:07:10.356 --rc geninfo_unexecuted_blocks=1 00:07:10.356 00:07:10.356 ' 00:07:10.356 15:05:39 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:10.356 15:05:39 -- nvmf/common.sh@7 -- # uname -s 00:07:10.356 15:05:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.356 15:05:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.356 15:05:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.356 15:05:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.356 15:05:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.356 15:05:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.356 15:05:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.356 15:05:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.356 15:05:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.356 15:05:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.356 15:05:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:07:10.356 15:05:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:07:10.356 15:05:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.356 15:05:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.356 15:05:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:10.356 15:05:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:10.356 15:05:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.356 15:05:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.356 15:05:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.356 15:05:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.356 15:05:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.356 15:05:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.356 15:05:39 -- paths/export.sh@5 -- # export PATH 00:07:10.356 15:05:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.356 15:05:39 -- nvmf/common.sh@46 -- # : 0 00:07:10.356 15:05:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:10.356 15:05:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:10.356 15:05:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:10.356 15:05:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.356 15:05:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.356 15:05:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:10.356 15:05:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:10.356 15:05:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:10.356 15:05:39 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:10.356 15:05:39 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:10.356 15:05:39 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:10.356 15:05:39 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:10.356 15:05:39 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:10.356 15:05:39 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:10.356 15:05:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:10.356 15:05:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.356 15:05:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:10.356 15:05:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:10.356 15:05:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:10.356 15:05:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.356 15:05:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:10.356 15:05:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.356 15:05:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:10.356 15:05:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:10.356 15:05:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:10.356 15:05:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:10.356 15:05:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:10.356 15:05:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:10.356 15:05:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.356 15:05:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.356 15:05:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:10.356 15:05:39 -- 
nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:10.356 15:05:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:10.356 15:05:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:10.356 15:05:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:10.356 15:05:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.356 15:05:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:10.356 15:05:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:10.356 15:05:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:10.356 15:05:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:10.356 15:05:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:10.356 15:05:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:10.356 Cannot find device "nvmf_tgt_br" 00:07:10.356 15:05:39 -- nvmf/common.sh@154 -- # true 00:07:10.356 15:05:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:10.356 Cannot find device "nvmf_tgt_br2" 00:07:10.356 15:05:39 -- nvmf/common.sh@155 -- # true 00:07:10.356 15:05:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:10.356 15:05:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:10.356 Cannot find device "nvmf_tgt_br" 00:07:10.356 15:05:39 -- nvmf/common.sh@157 -- # true 00:07:10.356 15:05:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:10.356 Cannot find device "nvmf_tgt_br2" 00:07:10.356 15:05:39 -- nvmf/common.sh@158 -- # true 00:07:10.356 15:05:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:10.356 15:05:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:10.615 15:05:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:10.615 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:10.615 15:05:39 -- nvmf/common.sh@161 -- # true 00:07:10.615 15:05:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:10.615 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:10.615 15:05:39 -- nvmf/common.sh@162 -- # true 00:07:10.615 15:05:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:10.615 15:05:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:10.615 15:05:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:10.615 15:05:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:10.615 15:05:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:10.615 15:05:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:10.615 15:05:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:10.615 15:05:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:10.615 15:05:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:10.615 15:05:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:10.615 15:05:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:10.615 15:05:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:10.615 15:05:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:10.615 15:05:39 -- 
nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:10.615 15:05:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:10.615 15:05:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:10.615 15:05:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:10.615 15:05:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:10.615 15:05:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:10.615 15:05:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:10.615 15:05:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:10.615 15:05:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:10.615 15:05:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:10.615 15:05:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:10.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:10.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:07:10.615 00:07:10.615 --- 10.0.0.2 ping statistics --- 00:07:10.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.615 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:07:10.615 15:05:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:10.615 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:10.615 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:07:10.615 00:07:10.615 --- 10.0.0.3 ping statistics --- 00:07:10.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.615 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:07:10.615 15:05:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:10.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:10.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:07:10.615 00:07:10.615 --- 10.0.0.1 ping statistics --- 00:07:10.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.615 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:07:10.615 15:05:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.615 15:05:39 -- nvmf/common.sh@421 -- # return 0 00:07:10.615 15:05:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:10.616 15:05:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.616 15:05:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:10.616 15:05:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:10.616 15:05:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.616 15:05:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:10.616 15:05:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:10.616 15:05:39 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:10.616 15:05:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:10.616 15:05:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:10.616 15:05:39 -- common/autotest_common.sh@10 -- # set +x 00:07:10.616 15:05:39 -- nvmf/common.sh@469 -- # nvmfpid=60383 00:07:10.616 15:05:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:10.616 15:05:39 -- nvmf/common.sh@470 -- # waitforlisten 60383 00:07:10.616 15:05:39 -- common/autotest_common.sh@829 -- # '[' -z 60383 ']' 00:07:10.616 15:05:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.616 15:05:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.616 15:05:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.616 15:05:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.616 15:05:39 -- common/autotest_common.sh@10 -- # set +x 00:07:10.874 [2024-11-06 15:05:39.901499] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:10.874 [2024-11-06 15:05:39.901612] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.874 [2024-11-06 15:05:40.044099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.874 [2024-11-06 15:05:40.113363] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:10.874 [2024-11-06 15:05:40.113569] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.874 [2024-11-06 15:05:40.113585] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.874 [2024-11-06 15:05:40.113596] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
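The three pings above close out nvmf_veth_init: the target runs in its own network namespace behind a bridge, with 10.0.0.1 on the host-side initiator interface and 10.0.0.2/10.0.0.3 on the two target interfaces. A condensed sketch of that topology setup, using the device and namespace names from the trace (run as root; error handling and cleanup of any previous run omitted):

#!/usr/bin/env bash
# Condensed sketch of the nvmf_veth_init topology traced above.
# host: nvmf_init_if (10.0.0.1) --- bridge nvmf_br --- netns nvmf_tgt_ns_spdk:
#       nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3)
set -e

ip netns add nvmf_tgt_ns_spdk

# One veth pair per interface; the *_br ends stay on the host and join the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Let NVMe/TCP traffic in and let the bridge forward between its ports.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks, as in the trace.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
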
00:07:10.874 [2024-11-06 15:05:40.114090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.874 [2024-11-06 15:05:40.114279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.874 [2024-11-06 15:05:40.114288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.811 15:05:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.811 15:05:40 -- common/autotest_common.sh@862 -- # return 0 00:07:11.811 15:05:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:11.811 15:05:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:11.811 15:05:40 -- common/autotest_common.sh@10 -- # set +x 00:07:11.811 15:05:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.812 15:05:40 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:12.071 [2024-11-06 15:05:41.239683] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.071 15:05:41 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:12.329 15:05:41 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:12.329 15:05:41 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:12.588 15:05:41 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:12.588 15:05:41 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:12.848 15:05:42 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:13.107 15:05:42 -- target/nvmf_lvol.sh@29 -- # lvs=320307a7-e0da-4c01-8165-1c7410cee875 00:07:13.107 15:05:42 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 320307a7-e0da-4c01-8165-1c7410cee875 lvol 20 00:07:13.366 15:05:42 -- target/nvmf_lvol.sh@32 -- # lvol=2b6be4b7-cb44-48e2-8010-cb98b11551de 00:07:13.366 15:05:42 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:13.624 15:05:42 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2b6be4b7-cb44-48e2-8010-cb98b11551de 00:07:13.884 15:05:43 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:14.142 [2024-11-06 15:05:43.301632] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.142 15:05:43 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:14.400 15:05:43 -- target/nvmf_lvol.sh@42 -- # perf_pid=60458 00:07:14.400 15:05:43 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:14.400 15:05:43 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:15.336 15:05:44 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 2b6be4b7-cb44-48e2-8010-cb98b11551de MY_SNAPSHOT 00:07:15.595 15:05:44 -- target/nvmf_lvol.sh@47 -- # snapshot=b2964076-a89f-4382-be45-77eecae6354c 00:07:15.595 15:05:44 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 2b6be4b7-cb44-48e2-8010-cb98b11551de 30 00:07:16.162 15:05:45 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone b2964076-a89f-4382-be45-77eecae6354c MY_CLONE 00:07:16.162 15:05:45 -- target/nvmf_lvol.sh@49 -- # clone=7f3ea4b7-f933-438c-830c-bee1d13bb1b2 00:07:16.162 15:05:45 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 7f3ea4b7-f933-438c-830c-bee1d13bb1b2 00:07:16.730 15:05:45 -- target/nvmf_lvol.sh@53 -- # wait 60458 00:07:24.849 Initializing NVMe Controllers 00:07:24.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:24.849 Controller IO queue size 128, less than required. 00:07:24.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:24.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:24.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:24.849 Initialization complete. Launching workers. 00:07:24.849 ======================================================== 00:07:24.849 Latency(us) 00:07:24.849 Device Information : IOPS MiB/s Average min max 00:07:24.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10023.30 39.15 12777.84 1572.81 67550.76 00:07:24.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10047.20 39.25 12747.44 281.24 90568.09 00:07:24.849 ======================================================== 00:07:24.849 Total : 20070.50 78.40 12762.62 281.24 90568.09 00:07:24.849 00:07:24.849 15:05:53 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:24.849 15:05:54 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2b6be4b7-cb44-48e2-8010-cb98b11551de 00:07:25.108 15:05:54 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 320307a7-e0da-4c01-8165-1c7410cee875 00:07:25.367 15:05:54 -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:25.367 15:05:54 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:25.367 15:05:54 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:25.367 15:05:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:25.367 15:05:54 -- nvmf/common.sh@116 -- # sync 00:07:25.367 15:05:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:25.367 15:05:54 -- nvmf/common.sh@119 -- # set +e 00:07:25.367 15:05:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:25.367 15:05:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:25.367 rmmod nvme_tcp 00:07:25.626 rmmod nvme_fabrics 00:07:25.627 rmmod nvme_keyring 00:07:25.627 15:05:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:25.627 15:05:54 -- nvmf/common.sh@123 -- # set -e 00:07:25.627 15:05:54 -- nvmf/common.sh@124 -- # return 0 00:07:25.627 15:05:54 -- nvmf/common.sh@477 -- # '[' -n 60383 ']' 00:07:25.627 15:05:54 -- nvmf/common.sh@478 -- # killprocess 60383 00:07:25.627 15:05:54 -- common/autotest_common.sh@936 -- # '[' -z 60383 ']' 00:07:25.627 15:05:54 -- common/autotest_common.sh@940 -- # kill -0 60383 00:07:25.627 15:05:54 -- common/autotest_common.sh@941 -- # uname 00:07:25.627 15:05:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:25.627 15:05:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
60383 00:07:25.627 killing process with pid 60383 00:07:25.627 15:05:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:25.627 15:05:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:25.627 15:05:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60383' 00:07:25.627 15:05:54 -- common/autotest_common.sh@955 -- # kill 60383 00:07:25.627 15:05:54 -- common/autotest_common.sh@960 -- # wait 60383 00:07:25.886 15:05:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:25.886 15:05:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:25.886 15:05:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:25.886 15:05:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:25.886 15:05:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:25.886 15:05:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.886 15:05:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:25.886 15:05:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.886 15:05:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:25.886 ************************************ 00:07:25.886 END TEST nvmf_lvol 00:07:25.886 ************************************ 00:07:25.886 00:07:25.886 real 0m15.674s 00:07:25.886 user 1m4.923s 00:07:25.886 sys 0m4.488s 00:07:25.886 15:05:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.886 15:05:55 -- common/autotest_common.sh@10 -- # set +x 00:07:25.886 15:05:55 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:25.886 15:05:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:25.886 15:05:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.886 15:05:55 -- common/autotest_common.sh@10 -- # set +x 00:07:25.886 ************************************ 00:07:25.886 START TEST nvmf_lvs_grow 00:07:25.886 ************************************ 00:07:25.886 15:05:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:25.886 * Looking for test storage... 
00:07:25.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:25.886 15:05:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:25.886 15:05:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:25.886 15:05:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:26.145 15:05:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:26.145 15:05:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:26.145 15:05:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:26.145 15:05:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:26.145 15:05:55 -- scripts/common.sh@335 -- # IFS=.-: 00:07:26.145 15:05:55 -- scripts/common.sh@335 -- # read -ra ver1 00:07:26.145 15:05:55 -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.145 15:05:55 -- scripts/common.sh@336 -- # read -ra ver2 00:07:26.145 15:05:55 -- scripts/common.sh@337 -- # local 'op=<' 00:07:26.145 15:05:55 -- scripts/common.sh@339 -- # ver1_l=2 00:07:26.145 15:05:55 -- scripts/common.sh@340 -- # ver2_l=1 00:07:26.145 15:05:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:26.145 15:05:55 -- scripts/common.sh@343 -- # case "$op" in 00:07:26.145 15:05:55 -- scripts/common.sh@344 -- # : 1 00:07:26.145 15:05:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:26.145 15:05:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:26.145 15:05:55 -- scripts/common.sh@364 -- # decimal 1 00:07:26.145 15:05:55 -- scripts/common.sh@352 -- # local d=1 00:07:26.145 15:05:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.145 15:05:55 -- scripts/common.sh@354 -- # echo 1 00:07:26.145 15:05:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:26.145 15:05:55 -- scripts/common.sh@365 -- # decimal 2 00:07:26.145 15:05:55 -- scripts/common.sh@352 -- # local d=2 00:07:26.145 15:05:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.145 15:05:55 -- scripts/common.sh@354 -- # echo 2 00:07:26.145 15:05:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:26.145 15:05:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:26.145 15:05:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:26.145 15:05:55 -- scripts/common.sh@367 -- # return 0 00:07:26.145 15:05:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.145 15:05:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:26.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.145 --rc genhtml_branch_coverage=1 00:07:26.145 --rc genhtml_function_coverage=1 00:07:26.145 --rc genhtml_legend=1 00:07:26.145 --rc geninfo_all_blocks=1 00:07:26.145 --rc geninfo_unexecuted_blocks=1 00:07:26.145 00:07:26.145 ' 00:07:26.145 15:05:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:26.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.145 --rc genhtml_branch_coverage=1 00:07:26.145 --rc genhtml_function_coverage=1 00:07:26.145 --rc genhtml_legend=1 00:07:26.145 --rc geninfo_all_blocks=1 00:07:26.145 --rc geninfo_unexecuted_blocks=1 00:07:26.145 00:07:26.145 ' 00:07:26.145 15:05:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:26.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.145 --rc genhtml_branch_coverage=1 00:07:26.145 --rc genhtml_function_coverage=1 00:07:26.145 --rc genhtml_legend=1 00:07:26.145 --rc geninfo_all_blocks=1 00:07:26.145 --rc geninfo_unexecuted_blocks=1 00:07:26.145 00:07:26.145 ' 00:07:26.145 
15:05:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:26.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.145 --rc genhtml_branch_coverage=1 00:07:26.145 --rc genhtml_function_coverage=1 00:07:26.145 --rc genhtml_legend=1 00:07:26.145 --rc geninfo_all_blocks=1 00:07:26.145 --rc geninfo_unexecuted_blocks=1 00:07:26.145 00:07:26.145 ' 00:07:26.145 15:05:55 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:26.145 15:05:55 -- nvmf/common.sh@7 -- # uname -s 00:07:26.145 15:05:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.145 15:05:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.145 15:05:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.145 15:05:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.145 15:05:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.145 15:05:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.145 15:05:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.145 15:05:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.145 15:05:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.145 15:05:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.145 15:05:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:07:26.145 15:05:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:07:26.145 15:05:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.145 15:05:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.145 15:05:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:26.145 15:05:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.145 15:05:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.145 15:05:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.145 15:05:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.145 15:05:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.145 15:05:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.146 15:05:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.146 15:05:55 -- paths/export.sh@5 -- # export PATH 00:07:26.146 15:05:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.146 15:05:55 -- nvmf/common.sh@46 -- # : 0 00:07:26.146 15:05:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:26.146 15:05:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:26.146 15:05:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:26.146 15:05:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.146 15:05:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.146 15:05:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:26.146 15:05:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:26.146 15:05:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:26.146 15:05:55 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:26.146 15:05:55 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:26.146 15:05:55 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:07:26.146 15:05:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:26.146 15:05:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.146 15:05:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:26.146 15:05:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:26.146 15:05:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:26.146 15:05:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.146 15:05:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.146 15:05:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.146 15:05:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:26.146 15:05:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:26.146 15:05:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:26.146 15:05:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:26.146 15:05:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:26.146 15:05:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:26.146 15:05:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.146 15:05:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.146 15:05:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:26.146 15:05:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:26.146 15:05:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:26.146 15:05:55 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:26.146 15:05:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:26.146 15:05:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.146 15:05:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:26.146 15:05:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:26.146 15:05:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:26.146 15:05:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:26.146 15:05:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:26.146 15:05:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:26.146 Cannot find device "nvmf_tgt_br" 00:07:26.146 15:05:55 -- nvmf/common.sh@154 -- # true 00:07:26.146 15:05:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:26.146 Cannot find device "nvmf_tgt_br2" 00:07:26.146 15:05:55 -- nvmf/common.sh@155 -- # true 00:07:26.146 15:05:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:26.146 15:05:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:26.146 Cannot find device "nvmf_tgt_br" 00:07:26.146 15:05:55 -- nvmf/common.sh@157 -- # true 00:07:26.146 15:05:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:26.146 Cannot find device "nvmf_tgt_br2" 00:07:26.146 15:05:55 -- nvmf/common.sh@158 -- # true 00:07:26.146 15:05:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:26.146 15:05:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:26.146 15:05:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:26.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:26.146 15:05:55 -- nvmf/common.sh@161 -- # true 00:07:26.146 15:05:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:26.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:26.146 15:05:55 -- nvmf/common.sh@162 -- # true 00:07:26.146 15:05:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:26.146 15:05:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:26.146 15:05:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:26.146 15:05:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:26.405 15:05:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:26.405 15:05:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:26.405 15:05:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:26.405 15:05:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:26.405 15:05:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:26.405 15:05:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:26.405 15:05:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:26.405 15:05:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:26.405 15:05:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:26.405 15:05:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:26.405 15:05:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:07:26.405 15:05:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:26.405 15:05:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:26.405 15:05:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:26.405 15:05:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:26.405 15:05:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:26.405 15:05:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:26.405 15:05:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:26.405 15:05:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:26.405 15:05:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:26.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:07:26.405 00:07:26.405 --- 10.0.0.2 ping statistics --- 00:07:26.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.405 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:07:26.405 15:05:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:26.405 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:26.405 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:07:26.405 00:07:26.405 --- 10.0.0.3 ping statistics --- 00:07:26.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.405 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:07:26.405 15:05:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:26.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:26.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:26.405 00:07:26.405 --- 10.0.0.1 ping statistics --- 00:07:26.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.405 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:26.405 15:05:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.405 15:05:55 -- nvmf/common.sh@421 -- # return 0 00:07:26.405 15:05:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:26.405 15:05:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.405 15:05:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:26.405 15:05:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:26.405 15:05:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.405 15:05:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:26.405 15:05:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:26.405 15:05:55 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:07:26.405 15:05:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:26.405 15:05:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:26.406 15:05:55 -- common/autotest_common.sh@10 -- # set +x 00:07:26.406 15:05:55 -- nvmf/common.sh@469 -- # nvmfpid=60793 00:07:26.406 15:05:55 -- nvmf/common.sh@470 -- # waitforlisten 60793 00:07:26.406 15:05:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:26.406 15:05:55 -- common/autotest_common.sh@829 -- # '[' -z 60793 ']' 00:07:26.406 15:05:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.406 15:05:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:26.406 15:05:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.406 15:05:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.406 15:05:55 -- common/autotest_common.sh@10 -- # set +x 00:07:26.406 [2024-11-06 15:05:55.652203] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:26.406 [2024-11-06 15:05:55.652291] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.665 [2024-11-06 15:05:55.791968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.665 [2024-11-06 15:05:55.843152] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:26.665 [2024-11-06 15:05:55.843441] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.665 [2024-11-06 15:05:55.843826] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.665 [2024-11-06 15:05:55.843945] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:26.665 [2024-11-06 15:05:55.844047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.601 15:05:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.601 15:05:56 -- common/autotest_common.sh@862 -- # return 0 00:07:27.601 15:05:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:27.601 15:05:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:27.601 15:05:56 -- common/autotest_common.sh@10 -- # set +x 00:07:27.601 15:05:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.601 15:05:56 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:27.858 [2024-11-06 15:05:56.966102] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.858 15:05:56 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:07:27.858 15:05:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:27.858 15:05:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.858 15:05:56 -- common/autotest_common.sh@10 -- # set +x 00:07:27.858 ************************************ 00:07:27.859 START TEST lvs_grow_clean 00:07:27.859 ************************************ 00:07:27.859 15:05:56 -- common/autotest_common.sh@1114 -- # lvs_grow 00:07:27.859 15:05:56 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:27.859 15:05:56 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:27.859 15:05:56 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:27.859 15:05:56 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:27.859 15:05:56 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:27.859 15:05:56 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:27.859 15:05:57 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:27.859 15:05:57 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:27.859 15:05:57 -- target/nvmf_lvs_grow.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:28.116 15:05:57 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:28.117 15:05:57 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:28.684 15:05:57 -- target/nvmf_lvs_grow.sh@28 -- # lvs=3521d1d2-cbac-4d07-966c-ff1fe7c36cd2 00:07:28.684 15:05:57 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3521d1d2-cbac-4d07-966c-ff1fe7c36cd2 00:07:28.684 15:05:57 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:28.684 15:05:57 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:28.684 15:05:57 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:28.684 15:05:57 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3521d1d2-cbac-4d07-966c-ff1fe7c36cd2 lvol 150 00:07:28.942 15:05:58 -- target/nvmf_lvs_grow.sh@33 -- # lvol=69cd4bea-f97d-4801-bb19-87d6fbd84f14 00:07:28.942 15:05:58 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:28.942 15:05:58 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:29.200 [2024-11-06 15:05:58.414401] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:29.200 [2024-11-06 15:05:58.414486] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:29.200 true 00:07:29.200 15:05:58 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3521d1d2-cbac-4d07-966c-ff1fe7c36cd2 00:07:29.200 15:05:58 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:29.459 15:05:58 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:29.459 15:05:58 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:29.717 15:05:58 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 69cd4bea-f97d-4801-bb19-87d6fbd84f14 00:07:29.975 15:05:59 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:30.233 [2024-11-06 15:05:59.388549] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.233 15:05:59 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:30.491 15:05:59 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=60881 00:07:30.491 15:05:59 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:30.491 15:05:59 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:30.491 15:05:59 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 60881 /var/tmp/bdevperf.sock 00:07:30.491 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock... 00:07:30.491 15:05:59 -- common/autotest_common.sh@829 -- # '[' -z 60881 ']' 00:07:30.491 15:05:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:30.491 15:05:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:30.491 15:05:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:30.491 15:05:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:30.491 15:05:59 -- common/autotest_common.sh@10 -- # set +x 00:07:30.491 [2024-11-06 15:05:59.652308] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:30.491 [2024-11-06 15:05:59.652604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60881 ] 00:07:30.748 [2024-11-06 15:05:59.791162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.748 [2024-11-06 15:05:59.860774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.314 15:06:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.314 15:06:00 -- common/autotest_common.sh@862 -- # return 0 00:07:31.314 15:06:00 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:31.573 Nvme0n1 00:07:31.573 15:06:00 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:31.832 [ 00:07:31.832 { 00:07:31.832 "name": "Nvme0n1", 00:07:31.832 "aliases": [ 00:07:31.832 "69cd4bea-f97d-4801-bb19-87d6fbd84f14" 00:07:31.832 ], 00:07:31.832 "product_name": "NVMe disk", 00:07:31.832 "block_size": 4096, 00:07:31.832 "num_blocks": 38912, 00:07:31.832 "uuid": "69cd4bea-f97d-4801-bb19-87d6fbd84f14", 00:07:31.832 "assigned_rate_limits": { 00:07:31.832 "rw_ios_per_sec": 0, 00:07:31.832 "rw_mbytes_per_sec": 0, 00:07:31.832 "r_mbytes_per_sec": 0, 00:07:31.832 "w_mbytes_per_sec": 0 00:07:31.832 }, 00:07:31.832 "claimed": false, 00:07:31.832 "zoned": false, 00:07:31.832 "supported_io_types": { 00:07:31.832 "read": true, 00:07:31.832 "write": true, 00:07:31.832 "unmap": true, 00:07:31.832 "write_zeroes": true, 00:07:31.832 "flush": true, 00:07:31.832 "reset": true, 00:07:31.832 "compare": true, 00:07:31.832 "compare_and_write": true, 00:07:31.832 "abort": true, 00:07:31.832 "nvme_admin": true, 00:07:31.832 "nvme_io": true 00:07:31.832 }, 00:07:31.832 "driver_specific": { 00:07:31.832 "nvme": [ 00:07:31.832 { 00:07:31.832 "trid": { 00:07:31.832 "trtype": "TCP", 00:07:31.832 "adrfam": "IPv4", 00:07:31.832 "traddr": "10.0.0.2", 00:07:31.832 "trsvcid": "4420", 00:07:31.832 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:31.832 }, 00:07:31.832 "ctrlr_data": { 00:07:31.832 "cntlid": 1, 00:07:31.832 "vendor_id": "0x8086", 00:07:31.832 "model_number": "SPDK bdev Controller", 00:07:31.832 "serial_number": "SPDK0", 00:07:31.832 "firmware_revision": "24.01.1", 00:07:31.832 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:31.832 "oacs": { 00:07:31.832 "security": 0, 00:07:31.832 "format": 0, 00:07:31.832 "firmware": 0, 00:07:31.832 "ns_manage": 0 00:07:31.832 }, 00:07:31.832 "multi_ctrlr": true, 00:07:31.832 "ana_reporting": false 00:07:31.832 }, 00:07:31.832 "vs": { 
00:07:31.832 "nvme_version": "1.3" 00:07:31.832 }, 00:07:31.832 "ns_data": { 00:07:31.832 "id": 1, 00:07:31.832 "can_share": true 00:07:31.832 } 00:07:31.832 } 00:07:31.832 ], 00:07:31.832 "mp_policy": "active_passive" 00:07:31.832 } 00:07:31.832 } 00:07:31.832 ] 00:07:31.832 15:06:01 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:31.832 15:06:01 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=60905 00:07:31.832 15:06:01 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:32.091 Running I/O for 10 seconds... 00:07:33.027 Latency(us) 00:07:33.027 [2024-11-06T15:06:02.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.027 [2024-11-06T15:06:02.302Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.027 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:07:33.027 [2024-11-06T15:06:02.302Z] =================================================================================================================== 00:07:33.027 [2024-11-06T15:06:02.302Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:07:33.027 00:07:33.963 15:06:03 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3521d1d2-cbac-4d07-966c-ff1fe7c36cd2 00:07:33.963 [2024-11-06T15:06:03.238Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.963 Nvme0n1 : 2.00 6424.00 25.09 0.00 0.00 0.00 0.00 0.00 00:07:33.963 [2024-11-06T15:06:03.238Z] =================================================================================================================== 00:07:33.963 [2024-11-06T15:06:03.238Z] Total : 6424.00 25.09 0.00 0.00 0.00 0.00 0.00 00:07:33.963 00:07:34.222 true 00:07:34.222 15:06:03 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3521d1d2-cbac-4d07-966c-ff1fe7c36cd2 00:07:34.222 15:06:03 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:34.480 15:06:03 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:34.480 15:06:03 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:34.480 15:06:03 -- target/nvmf_lvs_grow.sh@65 -- # wait 60905 00:07:35.053 [2024-11-06T15:06:04.328Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.053 Nvme0n1 : 3.00 6484.00 25.33 0.00 0.00 0.00 0.00 0.00 00:07:35.053 [2024-11-06T15:06:04.328Z] =================================================================================================================== 00:07:35.053 [2024-11-06T15:06:04.328Z] Total : 6484.00 25.33 0.00 0.00 0.00 0.00 0.00 00:07:35.053 00:07:35.991 [2024-11-06T15:06:05.266Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.991 Nvme0n1 : 4.00 6482.25 25.32 0.00 0.00 0.00 0.00 0.00 00:07:35.991 [2024-11-06T15:06:05.266Z] =================================================================================================================== 00:07:35.991 [2024-11-06T15:06:05.266Z] Total : 6482.25 25.32 0.00 0.00 0.00 0.00 0.00 00:07:35.991 00:07:37.368 [2024-11-06T15:06:06.643Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.368 Nvme0n1 : 5.00 6400.60 25.00 0.00 0.00 0.00 0.00 0.00 00:07:37.368 [2024-11-06T15:06:06.643Z] =================================================================================================================== 00:07:37.368 [2024-11-06T15:06:06.643Z] Total : 
6400.60 25.00 0.00 0.00 0.00 0.00 0.00 00:07:37.368 00:07:38.305 [2024-11-06T15:06:07.580Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.305 Nvme0n1 : 6.00 6413.33 25.05 0.00 0.00 0.00 0.00 0.00 00:07:38.305 [2024-11-06T15:06:07.580Z] =================================================================================================================== 00:07:38.305 [2024-11-06T15:06:07.580Z] Total : 6413.33 25.05 0.00 0.00 0.00 0.00 0.00 00:07:38.305 00:07:39.242 [2024-11-06T15:06:08.517Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.242 Nvme0n1 : 7.00 6422.43 25.09 0.00 0.00 0.00 0.00 0.00 00:07:39.242 [2024-11-06T15:06:08.517Z] =================================================================================================================== 00:07:39.242 [2024-11-06T15:06:08.517Z] Total : 6422.43 25.09 0.00 0.00 0.00 0.00 0.00 00:07:39.242 00:07:40.178 [2024-11-06T15:06:09.453Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.178 Nvme0n1 : 8.00 6413.38 25.05 0.00 0.00 0.00 0.00 0.00 00:07:40.178 [2024-11-06T15:06:09.453Z] =================================================================================================================== 00:07:40.178 [2024-11-06T15:06:09.453Z] Total : 6413.38 25.05 0.00 0.00 0.00 0.00 0.00 00:07:40.178 00:07:41.114 [2024-11-06T15:06:10.389Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.114 Nvme0n1 : 9.00 6406.33 25.02 0.00 0.00 0.00 0.00 0.00 00:07:41.114 [2024-11-06T15:06:10.389Z] =================================================================================================================== 00:07:41.114 [2024-11-06T15:06:10.389Z] Total : 6406.33 25.02 0.00 0.00 0.00 0.00 0.00 00:07:41.114 00:07:42.051 [2024-11-06T15:06:11.326Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.051 Nvme0n1 : 10.00 6388.00 24.95 0.00 0.00 0.00 0.00 0.00 00:07:42.051 [2024-11-06T15:06:11.326Z] =================================================================================================================== 00:07:42.051 [2024-11-06T15:06:11.326Z] Total : 6388.00 24.95 0.00 0.00 0.00 0.00 0.00 00:07:42.051 00:07:42.051 00:07:42.051 Latency(us) 00:07:42.051 [2024-11-06T15:06:11.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.051 [2024-11-06T15:06:11.326Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.051 Nvme0n1 : 10.00 6397.90 24.99 0.00 0.00 20001.45 3604.48 78166.57 00:07:42.051 [2024-11-06T15:06:11.326Z] =================================================================================================================== 00:07:42.051 [2024-11-06T15:06:11.326Z] Total : 6397.90 24.99 0.00 0.00 20001.45 3604.48 78166.57 00:07:42.051 0 00:07:42.051 15:06:11 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 60881 00:07:42.051 15:06:11 -- common/autotest_common.sh@936 -- # '[' -z 60881 ']' 00:07:42.051 15:06:11 -- common/autotest_common.sh@940 -- # kill -0 60881 00:07:42.051 15:06:11 -- common/autotest_common.sh@941 -- # uname 00:07:42.051 15:06:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:42.051 15:06:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60881 00:07:42.051 killing process with pid 60881 00:07:42.051 Received shutdown signal, test time was about 10.000000 seconds 00:07:42.051 00:07:42.051 Latency(us) 00:07:42.051 [2024-11-06T15:06:11.326Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.051 [2024-11-06T15:06:11.326Z] =================================================================================================================== 00:07:42.051 [2024-11-06T15:06:11.326Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:42.051 15:06:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:07:42.051 15:06:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:07:42.051 15:06:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60881' 00:07:42.051 15:06:11 -- common/autotest_common.sh@955 -- # kill 60881 00:07:42.051 15:06:11 -- common/autotest_common.sh@960 -- # wait 60881 00:07:42.311 15:06:11 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:42.584 15:06:11 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3521d1d2-cbac-4d07-966c-ff1fe7c36cd2 00:07:42.584 15:06:11 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:07:42.855 15:06:12 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:07:42.855 15:06:12 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:07:42.855 15:06:12 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:43.114 [2024-11-06 15:06:12.228209] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:43.114 15:06:12 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3521d1d2-cbac-4d07-966c-ff1fe7c36cd2 00:07:43.114 15:06:12 -- common/autotest_common.sh@650 -- # local es=0 00:07:43.114 15:06:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3521d1d2-cbac-4d07-966c-ff1fe7c36cd2 00:07:43.114 15:06:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:43.114 15:06:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.114 15:06:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:43.114 15:06:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.114 15:06:12 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:43.114 15:06:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.114 15:06:12 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:43.114 15:06:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:43.114 15:06:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3521d1d2-cbac-4d07-966c-ff1fe7c36cd2 00:07:43.373 request: 00:07:43.373 { 00:07:43.373 "uuid": "3521d1d2-cbac-4d07-966c-ff1fe7c36cd2", 00:07:43.373 "method": "bdev_lvol_get_lvstores", 00:07:43.373 "req_id": 1 00:07:43.373 } 00:07:43.373 Got JSON-RPC error response 00:07:43.373 response: 00:07:43.373 { 00:07:43.373 "code": -19, 00:07:43.373 "message": "No such device" 00:07:43.373 } 00:07:43.373 15:06:12 -- common/autotest_common.sh@653 -- # es=1 00:07:43.373 15:06:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:43.373 15:06:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:43.373 15:06:12 -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:07:43.373 15:06:12 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:43.631 aio_bdev 00:07:43.631 15:06:12 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 69cd4bea-f97d-4801-bb19-87d6fbd84f14 00:07:43.631 15:06:12 -- common/autotest_common.sh@897 -- # local bdev_name=69cd4bea-f97d-4801-bb19-87d6fbd84f14 00:07:43.631 15:06:12 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:43.631 15:06:12 -- common/autotest_common.sh@899 -- # local i 00:07:43.631 15:06:12 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:43.631 15:06:12 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:43.631 15:06:12 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:43.889 15:06:13 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 69cd4bea-f97d-4801-bb19-87d6fbd84f14 -t 2000 00:07:44.148 [ 00:07:44.148 { 00:07:44.148 "name": "69cd4bea-f97d-4801-bb19-87d6fbd84f14", 00:07:44.148 "aliases": [ 00:07:44.148 "lvs/lvol" 00:07:44.148 ], 00:07:44.148 "product_name": "Logical Volume", 00:07:44.148 "block_size": 4096, 00:07:44.148 "num_blocks": 38912, 00:07:44.148 "uuid": "69cd4bea-f97d-4801-bb19-87d6fbd84f14", 00:07:44.148 "assigned_rate_limits": { 00:07:44.148 "rw_ios_per_sec": 0, 00:07:44.148 "rw_mbytes_per_sec": 0, 00:07:44.148 "r_mbytes_per_sec": 0, 00:07:44.148 "w_mbytes_per_sec": 0 00:07:44.148 }, 00:07:44.148 "claimed": false, 00:07:44.148 "zoned": false, 00:07:44.148 "supported_io_types": { 00:07:44.148 "read": true, 00:07:44.148 "write": true, 00:07:44.148 "unmap": true, 00:07:44.148 "write_zeroes": true, 00:07:44.148 "flush": false, 00:07:44.148 "reset": true, 00:07:44.148 "compare": false, 00:07:44.148 "compare_and_write": false, 00:07:44.148 "abort": false, 00:07:44.148 "nvme_admin": false, 00:07:44.148 "nvme_io": false 00:07:44.148 }, 00:07:44.148 "driver_specific": { 00:07:44.149 "lvol": { 00:07:44.149 "lvol_store_uuid": "3521d1d2-cbac-4d07-966c-ff1fe7c36cd2", 00:07:44.149 "base_bdev": "aio_bdev", 00:07:44.149 "thin_provision": false, 00:07:44.149 "snapshot": false, 00:07:44.149 "clone": false, 00:07:44.149 "esnap_clone": false 00:07:44.149 } 00:07:44.149 } 00:07:44.149 } 00:07:44.149 ] 00:07:44.149 15:06:13 -- common/autotest_common.sh@905 -- # return 0 00:07:44.149 15:06:13 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:07:44.149 15:06:13 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3521d1d2-cbac-4d07-966c-ff1fe7c36cd2 00:07:44.407 15:06:13 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:07:44.407 15:06:13 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3521d1d2-cbac-4d07-966c-ff1fe7c36cd2 00:07:44.407 15:06:13 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:07:44.665 15:06:13 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:07:44.665 15:06:13 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 69cd4bea-f97d-4801-bb19-87d6fbd84f14 00:07:44.925 15:06:14 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3521d1d2-cbac-4d07-966c-ff1fe7c36cd2 00:07:45.184 15:06:14 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete 
aio_bdev 00:07:45.442 15:06:14 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:45.701 ************************************ 00:07:45.701 END TEST lvs_grow_clean 00:07:45.701 ************************************ 00:07:45.701 00:07:45.701 real 0m17.902s 00:07:45.701 user 0m16.988s 00:07:45.701 sys 0m2.292s 00:07:45.701 15:06:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.701 15:06:14 -- common/autotest_common.sh@10 -- # set +x 00:07:45.701 15:06:14 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:45.701 15:06:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:45.701 15:06:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.701 15:06:14 -- common/autotest_common.sh@10 -- # set +x 00:07:45.701 ************************************ 00:07:45.701 START TEST lvs_grow_dirty 00:07:45.701 ************************************ 00:07:45.701 15:06:14 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:07:45.701 15:06:14 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:45.701 15:06:14 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:45.701 15:06:14 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:45.701 15:06:14 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:45.701 15:06:14 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:45.701 15:06:14 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:45.701 15:06:14 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:45.701 15:06:14 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:45.701 15:06:14 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:46.269 15:06:15 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:46.269 15:06:15 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:46.269 15:06:15 -- target/nvmf_lvs_grow.sh@28 -- # lvs=5e7ad628-6718-4ac7-80c9-6b7441cf9294 00:07:46.269 15:06:15 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7ad628-6718-4ac7-80c9-6b7441cf9294 00:07:46.269 15:06:15 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:46.528 15:06:15 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:46.528 15:06:15 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:46.528 15:06:15 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5e7ad628-6718-4ac7-80c9-6b7441cf9294 lvol 150 00:07:46.786 15:06:16 -- target/nvmf_lvs_grow.sh@33 -- # lvol=1d8440ef-4afd-49ad-83c4-9d66711fb46c 00:07:46.786 15:06:16 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:46.786 15:06:16 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:47.045 [2024-11-06 15:06:16.257347] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:47.045 [2024-11-06 15:06:16.257431] vbdev_lvol.c: 
165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:47.045 true 00:07:47.045 15:06:16 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7ad628-6718-4ac7-80c9-6b7441cf9294 00:07:47.045 15:06:16 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:47.304 15:06:16 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:47.304 15:06:16 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:47.563 15:06:16 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1d8440ef-4afd-49ad-83c4-9d66711fb46c 00:07:47.822 15:06:16 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:48.081 15:06:17 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.339 15:06:17 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:48.339 15:06:17 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=61150 00:07:48.340 15:06:17 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:48.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:48.340 15:06:17 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 61150 /var/tmp/bdevperf.sock 00:07:48.340 15:06:17 -- common/autotest_common.sh@829 -- # '[' -z 61150 ']' 00:07:48.340 15:06:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:48.340 15:06:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:48.340 15:06:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:48.340 15:06:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:48.340 15:06:17 -- common/autotest_common.sh@10 -- # set +x 00:07:48.340 [2024-11-06 15:06:17.433089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:48.340 [2024-11-06 15:06:17.433347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61150 ] 00:07:48.340 [2024-11-06 15:06:17.569861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.598 [2024-11-06 15:06:17.639235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.164 15:06:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:49.164 15:06:18 -- common/autotest_common.sh@862 -- # return 0 00:07:49.164 15:06:18 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:49.422 Nvme0n1 00:07:49.422 15:06:18 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:49.681 [ 00:07:49.681 { 00:07:49.681 "name": "Nvme0n1", 00:07:49.681 "aliases": [ 00:07:49.681 "1d8440ef-4afd-49ad-83c4-9d66711fb46c" 00:07:49.681 ], 00:07:49.681 "product_name": "NVMe disk", 00:07:49.681 "block_size": 4096, 00:07:49.681 "num_blocks": 38912, 00:07:49.681 "uuid": "1d8440ef-4afd-49ad-83c4-9d66711fb46c", 00:07:49.681 "assigned_rate_limits": { 00:07:49.681 "rw_ios_per_sec": 0, 00:07:49.681 "rw_mbytes_per_sec": 0, 00:07:49.681 "r_mbytes_per_sec": 0, 00:07:49.681 "w_mbytes_per_sec": 0 00:07:49.681 }, 00:07:49.681 "claimed": false, 00:07:49.681 "zoned": false, 00:07:49.681 "supported_io_types": { 00:07:49.681 "read": true, 00:07:49.681 "write": true, 00:07:49.681 "unmap": true, 00:07:49.681 "write_zeroes": true, 00:07:49.681 "flush": true, 00:07:49.681 "reset": true, 00:07:49.681 "compare": true, 00:07:49.681 "compare_and_write": true, 00:07:49.681 "abort": true, 00:07:49.681 "nvme_admin": true, 00:07:49.681 "nvme_io": true 00:07:49.681 }, 00:07:49.681 "driver_specific": { 00:07:49.681 "nvme": [ 00:07:49.681 { 00:07:49.681 "trid": { 00:07:49.681 "trtype": "TCP", 00:07:49.681 "adrfam": "IPv4", 00:07:49.681 "traddr": "10.0.0.2", 00:07:49.681 "trsvcid": "4420", 00:07:49.681 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:49.681 }, 00:07:49.681 "ctrlr_data": { 00:07:49.681 "cntlid": 1, 00:07:49.681 "vendor_id": "0x8086", 00:07:49.681 "model_number": "SPDK bdev Controller", 00:07:49.681 "serial_number": "SPDK0", 00:07:49.681 "firmware_revision": "24.01.1", 00:07:49.681 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:49.681 "oacs": { 00:07:49.681 "security": 0, 00:07:49.681 "format": 0, 00:07:49.681 "firmware": 0, 00:07:49.681 "ns_manage": 0 00:07:49.681 }, 00:07:49.681 "multi_ctrlr": true, 00:07:49.681 "ana_reporting": false 00:07:49.681 }, 00:07:49.681 "vs": { 00:07:49.681 "nvme_version": "1.3" 00:07:49.681 }, 00:07:49.681 "ns_data": { 00:07:49.681 "id": 1, 00:07:49.681 "can_share": true 00:07:49.681 } 00:07:49.681 } 00:07:49.681 ], 00:07:49.681 "mp_policy": "active_passive" 00:07:49.681 } 00:07:49.681 } 00:07:49.681 ] 00:07:49.681 15:06:18 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=61168 00:07:49.681 15:06:18 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:49.681 15:06:18 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:49.681 Running I/O for 10 seconds... 
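While bdevperf runs 10 seconds of random writes against Nvme0n1, the test grows the lvol store into the space that was added earlier (the backing file was already truncated to 400M and picked up via bdev_aio_rescan, hence the "old block count 51200, new block count 102400" notice above). Condensed, the grow-and-verify step that shows up a couple of iterations into the run is roughly this sketch:

  # issued while bdevperf I/O is still in flight
  rpc.py bdev_lvol_grow_lvstore -u "$lvs"   # $lvs = 5e7ad628-6718-4ac7-80c9-6b7441cf9294 in this run
  # the store should now report 99 data clusters instead of the initial 49
  clusters=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
  (( clusters == 99 ))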
00:07:51.058 Latency(us) 00:07:51.058 [2024-11-06T15:06:20.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.058 [2024-11-06T15:06:20.333Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.058 Nvme0n1 : 1.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:07:51.058 [2024-11-06T15:06:20.333Z] =================================================================================================================== 00:07:51.058 [2024-11-06T15:06:20.333Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:07:51.058 00:07:51.626 15:06:20 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5e7ad628-6718-4ac7-80c9-6b7441cf9294 00:07:51.885 [2024-11-06T15:06:21.160Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.885 Nvme0n1 : 2.00 6286.50 24.56 0.00 0.00 0.00 0.00 0.00 00:07:51.885 [2024-11-06T15:06:21.160Z] =================================================================================================================== 00:07:51.885 [2024-11-06T15:06:21.160Z] Total : 6286.50 24.56 0.00 0.00 0.00 0.00 0.00 00:07:51.885 00:07:52.144 true 00:07:52.144 15:06:21 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7ad628-6718-4ac7-80c9-6b7441cf9294 00:07:52.144 15:06:21 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:52.403 15:06:21 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:52.403 15:06:21 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:52.403 15:06:21 -- target/nvmf_lvs_grow.sh@65 -- # wait 61168 00:07:52.970 [2024-11-06T15:06:22.245Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.970 Nvme0n1 : 3.00 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:07:52.970 [2024-11-06T15:06:22.245Z] =================================================================================================================== 00:07:52.970 [2024-11-06T15:06:22.245Z] Total : 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:07:52.970 00:07:53.907 [2024-11-06T15:06:23.182Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.907 Nvme0n1 : 4.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:53.907 [2024-11-06T15:06:23.182Z] =================================================================================================================== 00:07:53.907 [2024-11-06T15:06:23.182Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:53.907 00:07:54.844 [2024-11-06T15:06:24.119Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.844 Nvme0n1 : 5.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:54.844 [2024-11-06T15:06:24.119Z] =================================================================================================================== 00:07:54.844 [2024-11-06T15:06:24.119Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:54.844 00:07:55.781 [2024-11-06T15:06:25.056Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.781 Nvme0n1 : 6.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:55.781 [2024-11-06T15:06:25.056Z] =================================================================================================================== 00:07:55.781 [2024-11-06T15:06:25.056Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:55.781 00:07:56.719 [2024-11-06T15:06:25.994Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:56.719 Nvme0n1 : 7.00 6458.86 25.23 0.00 0.00 0.00 0.00 0.00 00:07:56.719 [2024-11-06T15:06:25.994Z] =================================================================================================================== 00:07:56.719 [2024-11-06T15:06:25.994Z] Total : 6458.86 25.23 0.00 0.00 0.00 0.00 0.00 00:07:56.719 00:07:58.096 [2024-11-06T15:06:27.371Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.096 Nvme0n1 : 8.00 6410.38 25.04 0.00 0.00 0.00 0.00 0.00 00:07:58.096 [2024-11-06T15:06:27.371Z] =================================================================================================================== 00:07:58.096 [2024-11-06T15:06:27.371Z] Total : 6410.38 25.04 0.00 0.00 0.00 0.00 0.00 00:07:58.096 00:07:59.033 [2024-11-06T15:06:28.308Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.033 Nvme0n1 : 9.00 6333.11 24.74 0.00 0.00 0.00 0.00 0.00 00:07:59.033 [2024-11-06T15:06:28.308Z] =================================================================================================================== 00:07:59.033 [2024-11-06T15:06:28.308Z] Total : 6333.11 24.74 0.00 0.00 0.00 0.00 0.00 00:07:59.033 00:07:59.970 [2024-11-06T15:06:29.245Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.970 Nvme0n1 : 10.00 6334.80 24.75 0.00 0.00 0.00 0.00 0.00 00:07:59.970 [2024-11-06T15:06:29.245Z] =================================================================================================================== 00:07:59.970 [2024-11-06T15:06:29.245Z] Total : 6334.80 24.75 0.00 0.00 0.00 0.00 0.00 00:07:59.970 00:07:59.970 00:07:59.970 Latency(us) 00:07:59.970 [2024-11-06T15:06:29.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.970 [2024-11-06T15:06:29.245Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.970 Nvme0n1 : 10.02 6333.48 24.74 0.00 0.00 20204.72 10902.81 167772.16 00:07:59.970 [2024-11-06T15:06:29.245Z] =================================================================================================================== 00:07:59.970 [2024-11-06T15:06:29.245Z] Total : 6333.48 24.74 0.00 0.00 20204.72 10902.81 167772.16 00:07:59.970 0 00:07:59.970 15:06:28 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 61150 00:07:59.970 15:06:28 -- common/autotest_common.sh@936 -- # '[' -z 61150 ']' 00:07:59.970 15:06:28 -- common/autotest_common.sh@940 -- # kill -0 61150 00:07:59.970 15:06:28 -- common/autotest_common.sh@941 -- # uname 00:07:59.970 15:06:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:59.970 15:06:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61150 00:07:59.970 killing process with pid 61150 00:07:59.970 Received shutdown signal, test time was about 10.000000 seconds 00:07:59.970 00:07:59.970 Latency(us) 00:07:59.970 [2024-11-06T15:06:29.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.970 [2024-11-06T15:06:29.245Z] =================================================================================================================== 00:07:59.970 [2024-11-06T15:06:29.245Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:59.970 15:06:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:07:59.970 15:06:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:07:59.970 15:06:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61150' 00:07:59.970 15:06:29 -- 
common/autotest_common.sh@955 -- # kill 61150 00:07:59.970 15:06:29 -- common/autotest_common.sh@960 -- # wait 61150 00:07:59.970 15:06:29 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:00.537 15:06:29 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7ad628-6718-4ac7-80c9-6b7441cf9294 00:08:00.537 15:06:29 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:08:00.537 15:06:29 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:08:00.537 15:06:29 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:08:00.537 15:06:29 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 60793 00:08:00.537 15:06:29 -- target/nvmf_lvs_grow.sh@74 -- # wait 60793 00:08:00.537 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 60793 Killed "${NVMF_APP[@]}" "$@" 00:08:00.537 15:06:29 -- target/nvmf_lvs_grow.sh@74 -- # true 00:08:00.537 15:06:29 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:08:00.537 15:06:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:00.537 15:06:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:00.537 15:06:29 -- common/autotest_common.sh@10 -- # set +x 00:08:00.795 15:06:29 -- nvmf/common.sh@469 -- # nvmfpid=61300 00:08:00.795 15:06:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:00.795 15:06:29 -- nvmf/common.sh@470 -- # waitforlisten 61300 00:08:00.795 15:06:29 -- common/autotest_common.sh@829 -- # '[' -z 61300 ']' 00:08:00.795 15:06:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.795 15:06:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:00.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.795 15:06:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.795 15:06:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:00.795 15:06:29 -- common/autotest_common.sh@10 -- # set +x 00:08:00.795 [2024-11-06 15:06:29.862673] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:00.795 [2024-11-06 15:06:29.862763] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.795 [2024-11-06 15:06:29.991661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.796 [2024-11-06 15:06:30.045535] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:00.796 [2024-11-06 15:06:30.045731] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.796 [2024-11-06 15:06:30.045747] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.796 [2024-11-06 15:06:30.045756] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
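This is the point of the dirty variant: the first target (pid 60793) was taken down with kill -9 instead of a clean shutdown, so the lvol store on aio_bdev was never closed cleanly. A fresh nvmf_tgt (pid 61300) is starting here, and when the test re-creates the AIO bdev against the same backing file, blobstore recovery has to run (the "Performing recovery on blobstore" notice just below). The post-restart check boils down to roughly this sketch:

  # re-attach the same 400M backing file; examine triggers blobstore recovery
  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  rpc.py bdev_wait_for_examine
  # the recovered store must still show the grown geometry: 61 free / 99 total clusters
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters, .[0].total_data_clusters'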
00:08:00.796 [2024-11-06 15:06:30.045797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.732 15:06:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:01.732 15:06:30 -- common/autotest_common.sh@862 -- # return 0 00:08:01.732 15:06:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:01.732 15:06:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:01.732 15:06:30 -- common/autotest_common.sh@10 -- # set +x 00:08:01.732 15:06:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.732 15:06:30 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:01.991 [2024-11-06 15:06:31.118619] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:01.991 [2024-11-06 15:06:31.119010] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:01.991 [2024-11-06 15:06:31.119586] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:01.991 15:06:31 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:08:01.991 15:06:31 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 1d8440ef-4afd-49ad-83c4-9d66711fb46c 00:08:01.991 15:06:31 -- common/autotest_common.sh@897 -- # local bdev_name=1d8440ef-4afd-49ad-83c4-9d66711fb46c 00:08:01.991 15:06:31 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:01.991 15:06:31 -- common/autotest_common.sh@899 -- # local i 00:08:01.991 15:06:31 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:01.991 15:06:31 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:01.991 15:06:31 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:02.251 15:06:31 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1d8440ef-4afd-49ad-83c4-9d66711fb46c -t 2000 00:08:02.510 [ 00:08:02.510 { 00:08:02.510 "name": "1d8440ef-4afd-49ad-83c4-9d66711fb46c", 00:08:02.510 "aliases": [ 00:08:02.510 "lvs/lvol" 00:08:02.510 ], 00:08:02.510 "product_name": "Logical Volume", 00:08:02.510 "block_size": 4096, 00:08:02.510 "num_blocks": 38912, 00:08:02.510 "uuid": "1d8440ef-4afd-49ad-83c4-9d66711fb46c", 00:08:02.510 "assigned_rate_limits": { 00:08:02.510 "rw_ios_per_sec": 0, 00:08:02.510 "rw_mbytes_per_sec": 0, 00:08:02.510 "r_mbytes_per_sec": 0, 00:08:02.510 "w_mbytes_per_sec": 0 00:08:02.510 }, 00:08:02.510 "claimed": false, 00:08:02.510 "zoned": false, 00:08:02.510 "supported_io_types": { 00:08:02.510 "read": true, 00:08:02.510 "write": true, 00:08:02.510 "unmap": true, 00:08:02.510 "write_zeroes": true, 00:08:02.510 "flush": false, 00:08:02.510 "reset": true, 00:08:02.510 "compare": false, 00:08:02.510 "compare_and_write": false, 00:08:02.510 "abort": false, 00:08:02.510 "nvme_admin": false, 00:08:02.510 "nvme_io": false 00:08:02.510 }, 00:08:02.510 "driver_specific": { 00:08:02.510 "lvol": { 00:08:02.510 "lvol_store_uuid": "5e7ad628-6718-4ac7-80c9-6b7441cf9294", 00:08:02.510 "base_bdev": "aio_bdev", 00:08:02.510 "thin_provision": false, 00:08:02.510 "snapshot": false, 00:08:02.510 "clone": false, 00:08:02.510 "esnap_clone": false 00:08:02.510 } 00:08:02.510 } 00:08:02.510 } 00:08:02.510 ] 00:08:02.510 15:06:31 -- common/autotest_common.sh@905 -- # return 0 00:08:02.510 15:06:31 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
5e7ad628-6718-4ac7-80c9-6b7441cf9294 00:08:02.510 15:06:31 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:08:02.769 15:06:31 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:08:02.769 15:06:31 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:08:02.769 15:06:31 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7ad628-6718-4ac7-80c9-6b7441cf9294 00:08:03.028 15:06:32 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:08:03.028 15:06:32 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:03.287 [2024-11-06 15:06:32.424652] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:03.287 15:06:32 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7ad628-6718-4ac7-80c9-6b7441cf9294 00:08:03.287 15:06:32 -- common/autotest_common.sh@650 -- # local es=0 00:08:03.287 15:06:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7ad628-6718-4ac7-80c9-6b7441cf9294 00:08:03.287 15:06:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.287 15:06:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.287 15:06:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.287 15:06:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.287 15:06:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.287 15:06:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.287 15:06:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.287 15:06:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:03.287 15:06:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7ad628-6718-4ac7-80c9-6b7441cf9294 00:08:03.546 request: 00:08:03.546 { 00:08:03.546 "uuid": "5e7ad628-6718-4ac7-80c9-6b7441cf9294", 00:08:03.546 "method": "bdev_lvol_get_lvstores", 00:08:03.546 "req_id": 1 00:08:03.546 } 00:08:03.546 Got JSON-RPC error response 00:08:03.546 response: 00:08:03.546 { 00:08:03.546 "code": -19, 00:08:03.546 "message": "No such device" 00:08:03.546 } 00:08:03.546 15:06:32 -- common/autotest_common.sh@653 -- # es=1 00:08:03.546 15:06:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:03.546 15:06:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:03.546 15:06:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:03.546 15:06:32 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:03.807 aio_bdev 00:08:03.807 15:06:32 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 1d8440ef-4afd-49ad-83c4-9d66711fb46c 00:08:03.807 15:06:32 -- common/autotest_common.sh@897 -- # local bdev_name=1d8440ef-4afd-49ad-83c4-9d66711fb46c 00:08:03.807 15:06:32 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:03.807 15:06:32 -- common/autotest_common.sh@899 -- # local i 00:08:03.807 15:06:32 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:03.807 15:06:32 -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:03.807 15:06:32 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:04.066 15:06:33 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1d8440ef-4afd-49ad-83c4-9d66711fb46c -t 2000 00:08:04.325 [ 00:08:04.325 { 00:08:04.325 "name": "1d8440ef-4afd-49ad-83c4-9d66711fb46c", 00:08:04.325 "aliases": [ 00:08:04.325 "lvs/lvol" 00:08:04.325 ], 00:08:04.325 "product_name": "Logical Volume", 00:08:04.325 "block_size": 4096, 00:08:04.325 "num_blocks": 38912, 00:08:04.325 "uuid": "1d8440ef-4afd-49ad-83c4-9d66711fb46c", 00:08:04.325 "assigned_rate_limits": { 00:08:04.325 "rw_ios_per_sec": 0, 00:08:04.325 "rw_mbytes_per_sec": 0, 00:08:04.325 "r_mbytes_per_sec": 0, 00:08:04.325 "w_mbytes_per_sec": 0 00:08:04.325 }, 00:08:04.325 "claimed": false, 00:08:04.325 "zoned": false, 00:08:04.325 "supported_io_types": { 00:08:04.325 "read": true, 00:08:04.325 "write": true, 00:08:04.325 "unmap": true, 00:08:04.325 "write_zeroes": true, 00:08:04.325 "flush": false, 00:08:04.325 "reset": true, 00:08:04.325 "compare": false, 00:08:04.325 "compare_and_write": false, 00:08:04.325 "abort": false, 00:08:04.325 "nvme_admin": false, 00:08:04.325 "nvme_io": false 00:08:04.325 }, 00:08:04.325 "driver_specific": { 00:08:04.325 "lvol": { 00:08:04.325 "lvol_store_uuid": "5e7ad628-6718-4ac7-80c9-6b7441cf9294", 00:08:04.325 "base_bdev": "aio_bdev", 00:08:04.325 "thin_provision": false, 00:08:04.325 "snapshot": false, 00:08:04.325 "clone": false, 00:08:04.325 "esnap_clone": false 00:08:04.325 } 00:08:04.325 } 00:08:04.325 } 00:08:04.325 ] 00:08:04.325 15:06:33 -- common/autotest_common.sh@905 -- # return 0 00:08:04.325 15:06:33 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7ad628-6718-4ac7-80c9-6b7441cf9294 00:08:04.325 15:06:33 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:08:04.585 15:06:33 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:08:04.585 15:06:33 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7ad628-6718-4ac7-80c9-6b7441cf9294 00:08:04.585 15:06:33 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:08:04.844 15:06:34 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:08:04.844 15:06:34 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1d8440ef-4afd-49ad-83c4-9d66711fb46c 00:08:05.103 15:06:34 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5e7ad628-6718-4ac7-80c9-6b7441cf9294 00:08:05.362 15:06:34 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:05.634 15:06:34 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:05.893 ************************************ 00:08:05.893 END TEST lvs_grow_dirty 00:08:05.893 ************************************ 00:08:05.893 00:08:05.893 real 0m20.114s 00:08:05.893 user 0m40.439s 00:08:05.893 sys 0m8.929s 00:08:05.893 15:06:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:05.893 15:06:35 -- common/autotest_common.sh@10 -- # set +x 00:08:05.893 15:06:35 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:05.893 15:06:35 -- common/autotest_common.sh@806 -- # type=--id 00:08:05.893 15:06:35 -- 
common/autotest_common.sh@807 -- # id=0 00:08:05.893 15:06:35 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:08:05.893 15:06:35 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:05.893 15:06:35 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:08:05.893 15:06:35 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:08:05.893 15:06:35 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:08:05.893 15:06:35 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:05.893 nvmf_trace.0 00:08:05.893 15:06:35 -- common/autotest_common.sh@821 -- # return 0 00:08:05.893 15:06:35 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:05.893 15:06:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:05.893 15:06:35 -- nvmf/common.sh@116 -- # sync 00:08:06.828 15:06:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:06.828 15:06:35 -- nvmf/common.sh@119 -- # set +e 00:08:06.828 15:06:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:06.828 15:06:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:06.828 rmmod nvme_tcp 00:08:06.828 rmmod nvme_fabrics 00:08:06.828 rmmod nvme_keyring 00:08:06.828 15:06:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:06.828 15:06:35 -- nvmf/common.sh@123 -- # set -e 00:08:06.828 15:06:35 -- nvmf/common.sh@124 -- # return 0 00:08:06.828 15:06:35 -- nvmf/common.sh@477 -- # '[' -n 61300 ']' 00:08:06.828 15:06:35 -- nvmf/common.sh@478 -- # killprocess 61300 00:08:06.828 15:06:35 -- common/autotest_common.sh@936 -- # '[' -z 61300 ']' 00:08:06.828 15:06:35 -- common/autotest_common.sh@940 -- # kill -0 61300 00:08:06.828 15:06:35 -- common/autotest_common.sh@941 -- # uname 00:08:06.828 15:06:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:06.828 15:06:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61300 00:08:06.828 killing process with pid 61300 00:08:06.828 15:06:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:06.828 15:06:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:06.828 15:06:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61300' 00:08:06.828 15:06:35 -- common/autotest_common.sh@955 -- # kill 61300 00:08:06.828 15:06:35 -- common/autotest_common.sh@960 -- # wait 61300 00:08:06.828 15:06:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:06.828 15:06:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:06.828 15:06:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:06.828 15:06:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:06.828 15:06:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:06.828 15:06:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.828 15:06:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.828 15:06:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.828 15:06:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:06.828 ************************************ 00:08:06.828 END TEST nvmf_lvs_grow 00:08:06.828 ************************************ 00:08:06.828 00:08:06.828 real 0m41.026s 00:08:06.828 user 1m4.377s 00:08:06.828 sys 0m12.350s 00:08:06.828 15:06:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:06.828 15:06:36 -- common/autotest_common.sh@10 -- # set +x 00:08:07.088 15:06:36 -- nvmf/nvmf.sh@49 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:07.088 15:06:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:07.088 15:06:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.088 15:06:36 -- common/autotest_common.sh@10 -- # set +x 00:08:07.088 ************************************ 00:08:07.088 START TEST nvmf_bdev_io_wait 00:08:07.088 ************************************ 00:08:07.088 15:06:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:07.088 * Looking for test storage... 00:08:07.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:07.088 15:06:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:07.088 15:06:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:07.088 15:06:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:07.088 15:06:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:07.088 15:06:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:07.088 15:06:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:07.088 15:06:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:07.088 15:06:36 -- scripts/common.sh@335 -- # IFS=.-: 00:08:07.088 15:06:36 -- scripts/common.sh@335 -- # read -ra ver1 00:08:07.088 15:06:36 -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.088 15:06:36 -- scripts/common.sh@336 -- # read -ra ver2 00:08:07.088 15:06:36 -- scripts/common.sh@337 -- # local 'op=<' 00:08:07.088 15:06:36 -- scripts/common.sh@339 -- # ver1_l=2 00:08:07.088 15:06:36 -- scripts/common.sh@340 -- # ver2_l=1 00:08:07.088 15:06:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:07.088 15:06:36 -- scripts/common.sh@343 -- # case "$op" in 00:08:07.088 15:06:36 -- scripts/common.sh@344 -- # : 1 00:08:07.088 15:06:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:07.088 15:06:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.088 15:06:36 -- scripts/common.sh@364 -- # decimal 1 00:08:07.088 15:06:36 -- scripts/common.sh@352 -- # local d=1 00:08:07.088 15:06:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.088 15:06:36 -- scripts/common.sh@354 -- # echo 1 00:08:07.088 15:06:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:07.088 15:06:36 -- scripts/common.sh@365 -- # decimal 2 00:08:07.088 15:06:36 -- scripts/common.sh@352 -- # local d=2 00:08:07.088 15:06:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.088 15:06:36 -- scripts/common.sh@354 -- # echo 2 00:08:07.088 15:06:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:07.088 15:06:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:07.088 15:06:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:07.088 15:06:36 -- scripts/common.sh@367 -- # return 0 00:08:07.088 15:06:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.088 15:06:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:07.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.088 --rc genhtml_branch_coverage=1 00:08:07.088 --rc genhtml_function_coverage=1 00:08:07.088 --rc genhtml_legend=1 00:08:07.088 --rc geninfo_all_blocks=1 00:08:07.088 --rc geninfo_unexecuted_blocks=1 00:08:07.088 00:08:07.088 ' 00:08:07.088 15:06:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:07.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.088 --rc genhtml_branch_coverage=1 00:08:07.088 --rc genhtml_function_coverage=1 00:08:07.088 --rc genhtml_legend=1 00:08:07.088 --rc geninfo_all_blocks=1 00:08:07.088 --rc geninfo_unexecuted_blocks=1 00:08:07.088 00:08:07.088 ' 00:08:07.088 15:06:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:07.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.088 --rc genhtml_branch_coverage=1 00:08:07.088 --rc genhtml_function_coverage=1 00:08:07.088 --rc genhtml_legend=1 00:08:07.088 --rc geninfo_all_blocks=1 00:08:07.088 --rc geninfo_unexecuted_blocks=1 00:08:07.088 00:08:07.088 ' 00:08:07.088 15:06:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:07.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.088 --rc genhtml_branch_coverage=1 00:08:07.088 --rc genhtml_function_coverage=1 00:08:07.088 --rc genhtml_legend=1 00:08:07.088 --rc geninfo_all_blocks=1 00:08:07.088 --rc geninfo_unexecuted_blocks=1 00:08:07.088 00:08:07.088 ' 00:08:07.088 15:06:36 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:07.088 15:06:36 -- nvmf/common.sh@7 -- # uname -s 00:08:07.088 15:06:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.088 15:06:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.088 15:06:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.088 15:06:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.088 15:06:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.088 15:06:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.088 15:06:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.088 15:06:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.088 15:06:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.088 15:06:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.088 15:06:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 
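The host NQN printed above comes from "nvme gen-hostnqn", which emits a UUID-based NQN; the harness then reuses the same UUID as the host ID so that later "nvme connect" invocations carry a matching --hostnqn/--hostid pair. A minimal sketch of that derivation (illustrative only; the exact common.sh code may differ):

    # Generate nqn.2014-08.org.nvmexpress:uuid:<uuid>, then keep the bare UUID for --hostid
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumption: strip everything up to the last ':'
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")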
00:08:07.088 15:06:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:08:07.088 15:06:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.088 15:06:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.088 15:06:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:07.088 15:06:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.088 15:06:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.088 15:06:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.088 15:06:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.088 15:06:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.088 15:06:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.088 15:06:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.088 15:06:36 -- paths/export.sh@5 -- # export PATH 00:08:07.089 15:06:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.089 15:06:36 -- nvmf/common.sh@46 -- # : 0 00:08:07.089 15:06:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:07.089 15:06:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:07.089 15:06:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:07.089 15:06:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.089 15:06:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.089 15:06:36 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:08:07.089 15:06:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:07.089 15:06:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:07.089 15:06:36 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:07.089 15:06:36 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:07.089 15:06:36 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:07.089 15:06:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:07.089 15:06:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.089 15:06:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:07.089 15:06:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:07.089 15:06:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:07.089 15:06:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.089 15:06:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.089 15:06:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.089 15:06:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:07.089 15:06:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:07.089 15:06:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:07.089 15:06:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:07.089 15:06:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:07.089 15:06:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:07.089 15:06:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.089 15:06:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.089 15:06:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:07.089 15:06:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:07.089 15:06:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:07.089 15:06:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:07.089 15:06:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:07.089 15:06:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.089 15:06:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:07.089 15:06:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:07.089 15:06:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:07.089 15:06:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:07.089 15:06:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:07.089 15:06:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:07.089 Cannot find device "nvmf_tgt_br" 00:08:07.348 15:06:36 -- nvmf/common.sh@154 -- # true 00:08:07.348 15:06:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:07.348 Cannot find device "nvmf_tgt_br2" 00:08:07.348 15:06:36 -- nvmf/common.sh@155 -- # true 00:08:07.348 15:06:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:07.348 15:06:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:07.348 Cannot find device "nvmf_tgt_br" 00:08:07.348 15:06:36 -- nvmf/common.sh@157 -- # true 00:08:07.348 15:06:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:07.348 Cannot find device "nvmf_tgt_br2" 00:08:07.348 15:06:36 -- nvmf/common.sh@158 -- # true 00:08:07.348 15:06:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:07.348 15:06:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:07.348 15:06:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:07.348 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:07.348 15:06:36 -- nvmf/common.sh@161 -- # true 00:08:07.348 15:06:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:07.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:07.348 15:06:36 -- nvmf/common.sh@162 -- # true 00:08:07.348 15:06:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:07.348 15:06:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:07.348 15:06:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:07.348 15:06:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:07.348 15:06:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:07.348 15:06:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:07.348 15:06:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:07.348 15:06:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:07.348 15:06:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:07.348 15:06:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:07.348 15:06:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:07.348 15:06:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:07.348 15:06:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:07.348 15:06:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:07.348 15:06:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:07.348 15:06:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:07.607 15:06:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:07.607 15:06:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:07.607 15:06:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:07.607 15:06:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:07.607 15:06:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:07.607 15:06:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:07.607 15:06:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:07.607 15:06:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:07.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:08:07.607 00:08:07.607 --- 10.0.0.2 ping statistics --- 00:08:07.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.607 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:07.607 15:06:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:07.607 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:07.607 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:08:07.607 00:08:07.607 --- 10.0.0.3 ping statistics --- 00:08:07.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.607 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:08:07.607 15:06:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:07.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:07.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:07.607 00:08:07.607 --- 10.0.0.1 ping statistics --- 00:08:07.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.607 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:07.607 15:06:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.607 15:06:36 -- nvmf/common.sh@421 -- # return 0 00:08:07.607 15:06:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:07.607 15:06:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.607 15:06:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:07.607 15:06:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:07.607 15:06:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.607 15:06:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:07.607 15:06:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:07.607 15:06:36 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:07.607 15:06:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:07.607 15:06:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:07.607 15:06:36 -- common/autotest_common.sh@10 -- # set +x 00:08:07.607 15:06:36 -- nvmf/common.sh@469 -- # nvmfpid=61622 00:08:07.607 15:06:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:07.607 15:06:36 -- nvmf/common.sh@470 -- # waitforlisten 61622 00:08:07.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.607 15:06:36 -- common/autotest_common.sh@829 -- # '[' -z 61622 ']' 00:08:07.607 15:06:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.607 15:06:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.607 15:06:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.607 15:06:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.607 15:06:36 -- common/autotest_common.sh@10 -- # set +x 00:08:07.607 [2024-11-06 15:06:36.773923] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:07.607 [2024-11-06 15:06:36.774010] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.866 [2024-11-06 15:06:36.914795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.866 [2024-11-06 15:06:36.969020] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:07.866 [2024-11-06 15:06:36.969381] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.866 [2024-11-06 15:06:36.969514] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.866 [2024-11-06 15:06:36.969696] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
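Condensed, the topology that nvmf_veth_init builds above is one host-side initiator interface bridged to two target-side interfaces living inside the nvmf_tgt_ns_spdk namespace, with TCP port 4420 opened toward the initiator. A minimal standalone reproduction using the same names and addresses as the trace (per-link bring-up omitted; this is a sketch, not the exact common.sh code):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The 10.0.0.2/10.0.0.3 ping checks above confirm initiator-to-target reachability before nvmf_tgt is started inside the namespace.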
00:08:07.866 [2024-11-06 15:06:36.969980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.866 [2024-11-06 15:06:36.970042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.866 [2024-11-06 15:06:36.970080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.866 [2024-11-06 15:06:36.970095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.866 15:06:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.866 15:06:37 -- common/autotest_common.sh@862 -- # return 0 00:08:07.866 15:06:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:07.866 15:06:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:07.866 15:06:37 -- common/autotest_common.sh@10 -- # set +x 00:08:07.866 15:06:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.867 15:06:37 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:07.867 15:06:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.867 15:06:37 -- common/autotest_common.sh@10 -- # set +x 00:08:07.867 15:06:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.867 15:06:37 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:07.867 15:06:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.867 15:06:37 -- common/autotest_common.sh@10 -- # set +x 00:08:07.867 15:06:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.867 15:06:37 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:07.867 15:06:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.867 15:06:37 -- common/autotest_common.sh@10 -- # set +x 00:08:07.867 [2024-11-06 15:06:37.118802] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.867 15:06:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.867 15:06:37 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:07.867 15:06:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.867 15:06:37 -- common/autotest_common.sh@10 -- # set +x 00:08:08.126 Malloc0 00:08:08.126 15:06:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.126 15:06:37 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:08.126 15:06:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.126 15:06:37 -- common/autotest_common.sh@10 -- # set +x 00:08:08.126 15:06:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.126 15:06:37 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:08.126 15:06:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.126 15:06:37 -- common/autotest_common.sh@10 -- # set +x 00:08:08.126 15:06:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.126 15:06:37 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.126 15:06:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.126 15:06:37 -- common/autotest_common.sh@10 -- # set +x 00:08:08.126 [2024-11-06 15:06:37.175738] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.126 15:06:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.126 15:06:37 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=61651 00:08:08.126 15:06:37 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:08.126 15:06:37 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:08.126 15:06:37 -- nvmf/common.sh@520 -- # config=() 00:08:08.126 15:06:37 -- nvmf/common.sh@520 -- # local subsystem config 00:08:08.126 15:06:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:08.126 15:06:37 -- target/bdev_io_wait.sh@30 -- # READ_PID=61653 00:08:08.126 15:06:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:08.126 { 00:08:08.126 "params": { 00:08:08.126 "name": "Nvme$subsystem", 00:08:08.126 "trtype": "$TEST_TRANSPORT", 00:08:08.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:08.126 "adrfam": "ipv4", 00:08:08.126 "trsvcid": "$NVMF_PORT", 00:08:08.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:08.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:08.126 "hdgst": ${hdgst:-false}, 00:08:08.126 "ddgst": ${ddgst:-false} 00:08:08.126 }, 00:08:08.126 "method": "bdev_nvme_attach_controller" 00:08:08.126 } 00:08:08.126 EOF 00:08:08.126 )") 00:08:08.126 15:06:37 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:08.126 15:06:37 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:08.126 15:06:37 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=61655 00:08:08.126 15:06:37 -- nvmf/common.sh@542 -- # cat 00:08:08.126 15:06:37 -- nvmf/common.sh@520 -- # config=() 00:08:08.126 15:06:37 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:08.126 15:06:37 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=61658 00:08:08.126 15:06:37 -- target/bdev_io_wait.sh@35 -- # sync 00:08:08.126 15:06:37 -- nvmf/common.sh@520 -- # local subsystem config 00:08:08.126 15:06:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:08.126 15:06:37 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:08.126 15:06:37 -- nvmf/common.sh@520 -- # config=() 00:08:08.126 15:06:37 -- nvmf/common.sh@520 -- # local subsystem config 00:08:08.126 15:06:37 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:08.126 15:06:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:08.126 15:06:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:08.126 { 00:08:08.126 "params": { 00:08:08.126 "name": "Nvme$subsystem", 00:08:08.126 "trtype": "$TEST_TRANSPORT", 00:08:08.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:08.126 "adrfam": "ipv4", 00:08:08.126 "trsvcid": "$NVMF_PORT", 00:08:08.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:08.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:08.126 "hdgst": ${hdgst:-false}, 00:08:08.126 "ddgst": ${ddgst:-false} 00:08:08.126 }, 00:08:08.126 "method": "bdev_nvme_attach_controller" 00:08:08.126 } 00:08:08.126 EOF 00:08:08.126 )") 00:08:08.126 15:06:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:08.126 { 00:08:08.126 "params": { 00:08:08.127 "name": "Nvme$subsystem", 00:08:08.127 "trtype": "$TEST_TRANSPORT", 00:08:08.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:08.127 "adrfam": "ipv4", 00:08:08.127 "trsvcid": "$NVMF_PORT", 00:08:08.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:08.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:08:08.127 "hdgst": ${hdgst:-false}, 00:08:08.127 "ddgst": ${ddgst:-false} 00:08:08.127 }, 00:08:08.127 "method": "bdev_nvme_attach_controller" 00:08:08.127 } 00:08:08.127 EOF 00:08:08.127 )") 00:08:08.127 15:06:37 -- nvmf/common.sh@544 -- # jq . 00:08:08.127 15:06:37 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:08.127 15:06:37 -- nvmf/common.sh@520 -- # config=() 00:08:08.127 15:06:37 -- nvmf/common.sh@520 -- # local subsystem config 00:08:08.127 15:06:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:08.127 15:06:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:08.127 { 00:08:08.127 "params": { 00:08:08.127 "name": "Nvme$subsystem", 00:08:08.127 "trtype": "$TEST_TRANSPORT", 00:08:08.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:08.127 "adrfam": "ipv4", 00:08:08.127 "trsvcid": "$NVMF_PORT", 00:08:08.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:08.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:08.127 "hdgst": ${hdgst:-false}, 00:08:08.127 "ddgst": ${ddgst:-false} 00:08:08.127 }, 00:08:08.127 "method": "bdev_nvme_attach_controller" 00:08:08.127 } 00:08:08.127 EOF 00:08:08.127 )") 00:08:08.127 15:06:37 -- nvmf/common.sh@542 -- # cat 00:08:08.127 15:06:37 -- nvmf/common.sh@545 -- # IFS=, 00:08:08.127 15:06:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:08.127 "params": { 00:08:08.127 "name": "Nvme1", 00:08:08.127 "trtype": "tcp", 00:08:08.127 "traddr": "10.0.0.2", 00:08:08.127 "adrfam": "ipv4", 00:08:08.127 "trsvcid": "4420", 00:08:08.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:08.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:08.127 "hdgst": false, 00:08:08.127 "ddgst": false 00:08:08.127 }, 00:08:08.127 "method": "bdev_nvme_attach_controller" 00:08:08.127 }' 00:08:08.127 15:06:37 -- nvmf/common.sh@542 -- # cat 00:08:08.127 15:06:37 -- nvmf/common.sh@542 -- # cat 00:08:08.127 15:06:37 -- nvmf/common.sh@544 -- # jq . 00:08:08.127 15:06:37 -- nvmf/common.sh@544 -- # jq . 00:08:08.127 15:06:37 -- nvmf/common.sh@545 -- # IFS=, 00:08:08.127 15:06:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:08.127 "params": { 00:08:08.127 "name": "Nvme1", 00:08:08.127 "trtype": "tcp", 00:08:08.127 "traddr": "10.0.0.2", 00:08:08.127 "adrfam": "ipv4", 00:08:08.127 "trsvcid": "4420", 00:08:08.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:08.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:08.127 "hdgst": false, 00:08:08.127 "ddgst": false 00:08:08.127 }, 00:08:08.127 "method": "bdev_nvme_attach_controller" 00:08:08.127 }' 00:08:08.127 15:06:37 -- nvmf/common.sh@545 -- # IFS=, 00:08:08.127 15:06:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:08.127 "params": { 00:08:08.127 "name": "Nvme1", 00:08:08.127 "trtype": "tcp", 00:08:08.127 "traddr": "10.0.0.2", 00:08:08.127 "adrfam": "ipv4", 00:08:08.127 "trsvcid": "4420", 00:08:08.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:08.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:08.127 "hdgst": false, 00:08:08.127 "ddgst": false 00:08:08.127 }, 00:08:08.127 "method": "bdev_nvme_attach_controller" 00:08:08.127 }' 00:08:08.127 15:06:37 -- nvmf/common.sh@544 -- # jq . 
00:08:08.127 15:06:37 -- nvmf/common.sh@545 -- # IFS=, 00:08:08.127 15:06:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:08.127 "params": { 00:08:08.127 "name": "Nvme1", 00:08:08.127 "trtype": "tcp", 00:08:08.127 "traddr": "10.0.0.2", 00:08:08.127 "adrfam": "ipv4", 00:08:08.127 "trsvcid": "4420", 00:08:08.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:08.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:08.127 "hdgst": false, 00:08:08.127 "ddgst": false 00:08:08.127 }, 00:08:08.127 "method": "bdev_nvme_attach_controller" 00:08:08.127 }' 00:08:08.127 15:06:37 -- target/bdev_io_wait.sh@37 -- # wait 61651 00:08:08.127 [2024-11-06 15:06:37.238839] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:08.127 [2024-11-06 15:06:37.239066] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:08.127 [2024-11-06 15:06:37.256719] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:08.127 [2024-11-06 15:06:37.256946] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:08.127 [2024-11-06 15:06:37.260306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:08.127 [2024-11-06 15:06:37.260754] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:08.127 [2024-11-06 15:06:37.295411] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:08.127 [2024-11-06 15:06:37.295937] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:08.386 [2024-11-06 15:06:37.422411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.386 [2024-11-06 15:06:37.465137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.386 [2024-11-06 15:06:37.476084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:08:08.386 [2024-11-06 15:06:37.511390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.386 [2024-11-06 15:06:37.518087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:08:08.386 [2024-11-06 15:06:37.555194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.386 [2024-11-06 15:06:37.564948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:08.386 Running I/O for 1 seconds... 00:08:08.386 [2024-11-06 15:06:37.607764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:08:08.386 Running I/O for 1 seconds... 00:08:08.645 Running I/O for 1 seconds... 00:08:08.645 Running I/O for 1 seconds... 
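Each of the four bdevperf instances launched above (write/read/flush/unmap on core masks 0x10, 0x20, 0x40 and 0x80) is handed the same controller-attach configuration over process substitution (--json /dev/fd/63). Reformatted for readability, the entry that gen_nvmf_target_json resolves for Nvme1 is the block below; the surrounding bdev-subsystem wrapper is not shown in the trace, so only the attach entry is reproduced:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

With that in place, the write instance is equivalent to running bdevperf -m 0x10 -i 1 --json <config> -q 128 -o 4096 -w write -t 1 -s 256 against the 10.0.0.2:4420 listener created earlier, and likewise for the read, flush and unmap instances.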
00:08:09.582 00:08:09.582 Latency(us) 00:08:09.582 [2024-11-06T15:06:38.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.582 [2024-11-06T15:06:38.857Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:09.582 Nvme1n1 : 1.00 172299.25 673.04 0.00 0.00 740.32 350.02 2442.71 00:08:09.582 [2024-11-06T15:06:38.857Z] =================================================================================================================== 00:08:09.582 [2024-11-06T15:06:38.857Z] Total : 172299.25 673.04 0.00 0.00 740.32 350.02 2442.71 00:08:09.582 00:08:09.582 Latency(us) 00:08:09.582 [2024-11-06T15:06:38.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.582 [2024-11-06T15:06:38.857Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:09.582 Nvme1n1 : 1.01 11837.37 46.24 0.00 0.00 10776.48 5689.72 19660.80 00:08:09.582 [2024-11-06T15:06:38.857Z] =================================================================================================================== 00:08:09.582 [2024-11-06T15:06:38.857Z] Total : 11837.37 46.24 0.00 0.00 10776.48 5689.72 19660.80 00:08:09.582 00:08:09.582 Latency(us) 00:08:09.582 [2024-11-06T15:06:38.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.582 [2024-11-06T15:06:38.857Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:09.582 Nvme1n1 : 1.01 8168.22 31.91 0.00 0.00 15601.37 7983.48 27167.65 00:08:09.582 [2024-11-06T15:06:38.857Z] =================================================================================================================== 00:08:09.582 [2024-11-06T15:06:38.857Z] Total : 8168.22 31.91 0.00 0.00 15601.37 7983.48 27167.65 00:08:09.582 00:08:09.582 Latency(us) 00:08:09.582 [2024-11-06T15:06:38.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.582 [2024-11-06T15:06:38.857Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:09.582 Nvme1n1 : 1.01 7789.30 30.43 0.00 0.00 16351.42 9055.88 28001.75 00:08:09.582 [2024-11-06T15:06:38.857Z] =================================================================================================================== 00:08:09.582 [2024-11-06T15:06:38.857Z] Total : 7789.30 30.43 0.00 0.00 16351.42 9055.88 28001.75 00:08:09.841 15:06:38 -- target/bdev_io_wait.sh@38 -- # wait 61653 00:08:09.841 15:06:38 -- target/bdev_io_wait.sh@39 -- # wait 61655 00:08:09.841 15:06:38 -- target/bdev_io_wait.sh@40 -- # wait 61658 00:08:09.841 15:06:38 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.841 15:06:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.841 15:06:38 -- common/autotest_common.sh@10 -- # set +x 00:08:09.841 15:06:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.841 15:06:38 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:09.841 15:06:38 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:09.841 15:06:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:09.841 15:06:38 -- nvmf/common.sh@116 -- # sync 00:08:09.841 15:06:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:09.841 15:06:38 -- nvmf/common.sh@119 -- # set +e 00:08:09.841 15:06:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:09.841 15:06:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:09.841 rmmod nvme_tcp 00:08:09.841 rmmod nvme_fabrics 00:08:09.841 rmmod nvme_keyring 00:08:09.841 15:06:39 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:09.841 15:06:39 -- nvmf/common.sh@123 -- # set -e 00:08:09.841 15:06:39 -- nvmf/common.sh@124 -- # return 0 00:08:09.841 15:06:39 -- nvmf/common.sh@477 -- # '[' -n 61622 ']' 00:08:09.841 15:06:39 -- nvmf/common.sh@478 -- # killprocess 61622 00:08:09.841 15:06:39 -- common/autotest_common.sh@936 -- # '[' -z 61622 ']' 00:08:09.841 15:06:39 -- common/autotest_common.sh@940 -- # kill -0 61622 00:08:09.841 15:06:39 -- common/autotest_common.sh@941 -- # uname 00:08:09.841 15:06:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:09.841 15:06:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61622 00:08:09.841 killing process with pid 61622 00:08:09.841 15:06:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:09.841 15:06:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:09.841 15:06:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61622' 00:08:09.841 15:06:39 -- common/autotest_common.sh@955 -- # kill 61622 00:08:09.841 15:06:39 -- common/autotest_common.sh@960 -- # wait 61622 00:08:10.101 15:06:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:10.101 15:06:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:10.101 15:06:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:10.101 15:06:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:10.101 15:06:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:10.101 15:06:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.101 15:06:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.101 15:06:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.101 15:06:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:10.101 ************************************ 00:08:10.101 END TEST nvmf_bdev_io_wait 00:08:10.101 ************************************ 00:08:10.101 00:08:10.101 real 0m3.141s 00:08:10.101 user 0m13.465s 00:08:10.101 sys 0m1.890s 00:08:10.101 15:06:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:10.101 15:06:39 -- common/autotest_common.sh@10 -- # set +x 00:08:10.101 15:06:39 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:10.101 15:06:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:10.101 15:06:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.101 15:06:39 -- common/autotest_common.sh@10 -- # set +x 00:08:10.101 ************************************ 00:08:10.101 START TEST nvmf_queue_depth 00:08:10.101 ************************************ 00:08:10.101 15:06:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:10.360 * Looking for test storage... 
00:08:10.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:10.360 15:06:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:10.360 15:06:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:10.360 15:06:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:10.360 15:06:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:10.360 15:06:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:10.360 15:06:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:10.360 15:06:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:10.360 15:06:39 -- scripts/common.sh@335 -- # IFS=.-: 00:08:10.360 15:06:39 -- scripts/common.sh@335 -- # read -ra ver1 00:08:10.360 15:06:39 -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.360 15:06:39 -- scripts/common.sh@336 -- # read -ra ver2 00:08:10.360 15:06:39 -- scripts/common.sh@337 -- # local 'op=<' 00:08:10.360 15:06:39 -- scripts/common.sh@339 -- # ver1_l=2 00:08:10.360 15:06:39 -- scripts/common.sh@340 -- # ver2_l=1 00:08:10.360 15:06:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:10.360 15:06:39 -- scripts/common.sh@343 -- # case "$op" in 00:08:10.360 15:06:39 -- scripts/common.sh@344 -- # : 1 00:08:10.360 15:06:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:10.360 15:06:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:10.360 15:06:39 -- scripts/common.sh@364 -- # decimal 1 00:08:10.360 15:06:39 -- scripts/common.sh@352 -- # local d=1 00:08:10.360 15:06:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.360 15:06:39 -- scripts/common.sh@354 -- # echo 1 00:08:10.360 15:06:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:10.360 15:06:39 -- scripts/common.sh@365 -- # decimal 2 00:08:10.360 15:06:39 -- scripts/common.sh@352 -- # local d=2 00:08:10.360 15:06:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.360 15:06:39 -- scripts/common.sh@354 -- # echo 2 00:08:10.360 15:06:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:10.360 15:06:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:10.361 15:06:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:10.361 15:06:39 -- scripts/common.sh@367 -- # return 0 00:08:10.361 15:06:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.361 15:06:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:10.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.361 --rc genhtml_branch_coverage=1 00:08:10.361 --rc genhtml_function_coverage=1 00:08:10.361 --rc genhtml_legend=1 00:08:10.361 --rc geninfo_all_blocks=1 00:08:10.361 --rc geninfo_unexecuted_blocks=1 00:08:10.361 00:08:10.361 ' 00:08:10.361 15:06:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:10.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.361 --rc genhtml_branch_coverage=1 00:08:10.361 --rc genhtml_function_coverage=1 00:08:10.361 --rc genhtml_legend=1 00:08:10.361 --rc geninfo_all_blocks=1 00:08:10.361 --rc geninfo_unexecuted_blocks=1 00:08:10.361 00:08:10.361 ' 00:08:10.361 15:06:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:10.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.361 --rc genhtml_branch_coverage=1 00:08:10.361 --rc genhtml_function_coverage=1 00:08:10.361 --rc genhtml_legend=1 00:08:10.361 --rc geninfo_all_blocks=1 00:08:10.361 --rc geninfo_unexecuted_blocks=1 00:08:10.361 00:08:10.361 ' 00:08:10.361 
15:06:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:10.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.361 --rc genhtml_branch_coverage=1 00:08:10.361 --rc genhtml_function_coverage=1 00:08:10.361 --rc genhtml_legend=1 00:08:10.361 --rc geninfo_all_blocks=1 00:08:10.361 --rc geninfo_unexecuted_blocks=1 00:08:10.361 00:08:10.361 ' 00:08:10.361 15:06:39 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:10.361 15:06:39 -- nvmf/common.sh@7 -- # uname -s 00:08:10.361 15:06:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.361 15:06:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.361 15:06:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.361 15:06:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.361 15:06:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.361 15:06:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.361 15:06:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.361 15:06:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.361 15:06:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.361 15:06:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.361 15:06:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:08:10.361 15:06:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:08:10.361 15:06:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.361 15:06:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.361 15:06:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:10.361 15:06:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:10.361 15:06:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.361 15:06:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.361 15:06:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.361 15:06:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.361 15:06:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.361 15:06:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.361 15:06:39 -- paths/export.sh@5 -- # export PATH 00:08:10.361 15:06:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.361 15:06:39 -- nvmf/common.sh@46 -- # : 0 00:08:10.361 15:06:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:10.361 15:06:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:10.361 15:06:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:10.361 15:06:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.361 15:06:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.361 15:06:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:10.361 15:06:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:10.361 15:06:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:10.361 15:06:39 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:10.361 15:06:39 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:10.361 15:06:39 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:10.361 15:06:39 -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:10.361 15:06:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:10.361 15:06:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.361 15:06:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:10.361 15:06:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:10.361 15:06:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:10.361 15:06:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.361 15:06:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.361 15:06:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.361 15:06:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:10.361 15:06:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:10.361 15:06:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:10.361 15:06:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:10.361 15:06:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:10.361 15:06:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:10.361 15:06:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.361 15:06:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.361 15:06:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:10.361 15:06:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:10.361 15:06:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:10.361 15:06:39 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:10.361 15:06:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:10.361 15:06:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.361 15:06:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:10.361 15:06:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:10.361 15:06:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:10.361 15:06:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:10.361 15:06:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:10.361 15:06:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:10.361 Cannot find device "nvmf_tgt_br" 00:08:10.361 15:06:39 -- nvmf/common.sh@154 -- # true 00:08:10.361 15:06:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:10.361 Cannot find device "nvmf_tgt_br2" 00:08:10.361 15:06:39 -- nvmf/common.sh@155 -- # true 00:08:10.361 15:06:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:10.361 15:06:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:10.361 Cannot find device "nvmf_tgt_br" 00:08:10.361 15:06:39 -- nvmf/common.sh@157 -- # true 00:08:10.361 15:06:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:10.361 Cannot find device "nvmf_tgt_br2" 00:08:10.361 15:06:39 -- nvmf/common.sh@158 -- # true 00:08:10.361 15:06:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:10.361 15:06:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:10.620 15:06:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:10.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:10.620 15:06:39 -- nvmf/common.sh@161 -- # true 00:08:10.620 15:06:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:10.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:10.620 15:06:39 -- nvmf/common.sh@162 -- # true 00:08:10.620 15:06:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:10.620 15:06:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:10.620 15:06:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:10.620 15:06:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:10.620 15:06:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:10.620 15:06:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:10.620 15:06:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:10.620 15:06:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:10.620 15:06:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:10.620 15:06:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:10.620 15:06:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:10.620 15:06:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:10.620 15:06:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:10.620 15:06:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:10.620 15:06:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:08:10.620 15:06:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:10.620 15:06:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:10.620 15:06:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:10.620 15:06:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:10.620 15:06:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:10.620 15:06:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:10.620 15:06:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:10.620 15:06:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:10.620 15:06:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:10.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:08:10.620 00:08:10.620 --- 10.0.0.2 ping statistics --- 00:08:10.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.620 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:10.620 15:06:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:10.620 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:10.620 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:08:10.620 00:08:10.620 --- 10.0.0.3 ping statistics --- 00:08:10.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.620 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:10.620 15:06:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:10.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:10.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:08:10.620 00:08:10.620 --- 10.0.0.1 ping statistics --- 00:08:10.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.620 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:10.620 15:06:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.620 15:06:39 -- nvmf/common.sh@421 -- # return 0 00:08:10.620 15:06:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:10.620 15:06:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.620 15:06:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:10.620 15:06:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:10.620 15:06:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.620 15:06:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:10.620 15:06:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:10.620 15:06:39 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:10.620 15:06:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:10.620 15:06:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:10.620 15:06:39 -- common/autotest_common.sh@10 -- # set +x 00:08:10.620 15:06:39 -- nvmf/common.sh@469 -- # nvmfpid=61868 00:08:10.620 15:06:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:10.620 15:06:39 -- nvmf/common.sh@470 -- # waitforlisten 61868 00:08:10.620 15:06:39 -- common/autotest_common.sh@829 -- # '[' -z 61868 ']' 00:08:10.620 15:06:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.620 15:06:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:10.620 15:06:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.620 15:06:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:10.620 15:06:39 -- common/autotest_common.sh@10 -- # set +x 00:08:10.620 [2024-11-06 15:06:39.888509] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:10.620 [2024-11-06 15:06:39.888592] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.880 [2024-11-06 15:06:40.021319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.880 [2024-11-06 15:06:40.073966] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:10.880 [2024-11-06 15:06:40.074327] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.880 [2024-11-06 15:06:40.074379] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.880 [2024-11-06 15:06:40.074495] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.880 [2024-11-06 15:06:40.074556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.816 15:06:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:11.816 15:06:40 -- common/autotest_common.sh@862 -- # return 0 00:08:11.816 15:06:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:11.816 15:06:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:11.816 15:06:40 -- common/autotest_common.sh@10 -- # set +x 00:08:11.816 15:06:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.816 15:06:40 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:11.816 15:06:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.816 15:06:40 -- common/autotest_common.sh@10 -- # set +x 00:08:11.816 [2024-11-06 15:06:40.914437] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.816 15:06:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.816 15:06:40 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:11.816 15:06:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.816 15:06:40 -- common/autotest_common.sh@10 -- # set +x 00:08:11.816 Malloc0 00:08:11.816 15:06:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.816 15:06:40 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:11.816 15:06:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.816 15:06:40 -- common/autotest_common.sh@10 -- # set +x 00:08:11.816 15:06:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.816 15:06:40 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:11.816 15:06:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.816 15:06:40 -- common/autotest_common.sh@10 -- # set +x 00:08:11.816 15:06:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.816 15:06:40 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:11.816 15:06:40 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.816 15:06:40 -- common/autotest_common.sh@10 -- # set +x 00:08:11.816 [2024-11-06 15:06:40.965751] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:11.816 15:06:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.816 15:06:40 -- target/queue_depth.sh@30 -- # bdevperf_pid=61900 00:08:11.816 15:06:40 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:11.817 15:06:40 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:11.817 15:06:40 -- target/queue_depth.sh@33 -- # waitforlisten 61900 /var/tmp/bdevperf.sock 00:08:11.817 15:06:40 -- common/autotest_common.sh@829 -- # '[' -z 61900 ']' 00:08:11.817 15:06:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:11.817 15:06:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:11.817 15:06:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:11.817 15:06:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:11.817 15:06:40 -- common/autotest_common.sh@10 -- # set +x 00:08:11.817 [2024-11-06 15:06:41.024930] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:11.817 [2024-11-06 15:06:41.025013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61900 ] 00:08:12.076 [2024-11-06 15:06:41.167682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.076 [2024-11-06 15:06:41.240402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.014 15:06:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:13.014 15:06:41 -- common/autotest_common.sh@862 -- # return 0 00:08:13.014 15:06:41 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:13.014 15:06:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.014 15:06:41 -- common/autotest_common.sh@10 -- # set +x 00:08:13.014 NVMe0n1 00:08:13.014 15:06:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.015 15:06:42 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:13.015 Running I/O for 10 seconds... 
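Stripped of the rpc_cmd/xtrace plumbing, the queue-depth scenario this trace is walking through reduces to a short sequence of RPCs against the target plus a bdevperf initiator. A condensed sketch of the same steps, using the paths and arguments that appear verbatim in the log (scripts/rpc.py talks to the default /var/tmp/spdk.sock for the target side):

  # target: nvmf_tgt in the test netns, TCP transport, 64 MB / 512 B-block malloc namespace on 10.0.0.2:4420
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator: bdevperf idles on its own RPC socket (-z), gets the remote controller attached,
  # then runs 10 s of 4 KiB verify I/O at queue depth 1024
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The results block that follows is the output of that perform_tests call.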
00:08:22.994 00:08:22.994 Latency(us) 00:08:22.994 [2024-11-06T15:06:52.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.994 [2024-11-06T15:06:52.269Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:22.994 Verification LBA range: start 0x0 length 0x4000 00:08:22.994 NVMe0n1 : 10.08 13649.57 53.32 0.00 0.00 74676.52 14120.03 66250.94 00:08:22.994 [2024-11-06T15:06:52.269Z] =================================================================================================================== 00:08:22.994 [2024-11-06T15:06:52.269Z] Total : 13649.57 53.32 0.00 0.00 74676.52 14120.03 66250.94 00:08:22.994 0 00:08:23.253 15:06:52 -- target/queue_depth.sh@39 -- # killprocess 61900 00:08:23.253 15:06:52 -- common/autotest_common.sh@936 -- # '[' -z 61900 ']' 00:08:23.253 15:06:52 -- common/autotest_common.sh@940 -- # kill -0 61900 00:08:23.253 15:06:52 -- common/autotest_common.sh@941 -- # uname 00:08:23.253 15:06:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:23.253 15:06:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61900 00:08:23.253 15:06:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:23.253 15:06:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:23.253 killing process with pid 61900 00:08:23.253 15:06:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61900' 00:08:23.253 Received shutdown signal, test time was about 10.000000 seconds 00:08:23.253 00:08:23.253 Latency(us) 00:08:23.253 [2024-11-06T15:06:52.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.253 [2024-11-06T15:06:52.528Z] =================================================================================================================== 00:08:23.253 [2024-11-06T15:06:52.528Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:23.253 15:06:52 -- common/autotest_common.sh@955 -- # kill 61900 00:08:23.253 15:06:52 -- common/autotest_common.sh@960 -- # wait 61900 00:08:23.253 15:06:52 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:23.253 15:06:52 -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:23.253 15:06:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:23.253 15:06:52 -- nvmf/common.sh@116 -- # sync 00:08:23.524 15:06:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:23.524 15:06:52 -- nvmf/common.sh@119 -- # set +e 00:08:23.524 15:06:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:23.524 15:06:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:23.524 rmmod nvme_tcp 00:08:23.524 rmmod nvme_fabrics 00:08:23.524 rmmod nvme_keyring 00:08:23.524 15:06:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:23.524 15:06:52 -- nvmf/common.sh@123 -- # set -e 00:08:23.524 15:06:52 -- nvmf/common.sh@124 -- # return 0 00:08:23.524 15:06:52 -- nvmf/common.sh@477 -- # '[' -n 61868 ']' 00:08:23.524 15:06:52 -- nvmf/common.sh@478 -- # killprocess 61868 00:08:23.524 15:06:52 -- common/autotest_common.sh@936 -- # '[' -z 61868 ']' 00:08:23.524 15:06:52 -- common/autotest_common.sh@940 -- # kill -0 61868 00:08:23.524 15:06:52 -- common/autotest_common.sh@941 -- # uname 00:08:23.524 15:06:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:23.524 15:06:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61868 00:08:23.524 15:06:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:23.524 15:06:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo 
']' 00:08:23.524 killing process with pid 61868 00:08:23.524 15:06:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61868' 00:08:23.524 15:06:52 -- common/autotest_common.sh@955 -- # kill 61868 00:08:23.524 15:06:52 -- common/autotest_common.sh@960 -- # wait 61868 00:08:23.797 15:06:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:23.797 15:06:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:23.797 15:06:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:23.797 15:06:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:23.797 15:06:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:23.797 15:06:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.797 15:06:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.797 15:06:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.797 15:06:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:23.797 00:08:23.797 real 0m13.603s 00:08:23.797 user 0m23.686s 00:08:23.797 sys 0m1.969s 00:08:23.797 15:06:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:23.797 ************************************ 00:08:23.797 15:06:52 -- common/autotest_common.sh@10 -- # set +x 00:08:23.797 END TEST nvmf_queue_depth 00:08:23.797 ************************************ 00:08:23.797 15:06:52 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:23.797 15:06:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:23.797 15:06:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.797 15:06:52 -- common/autotest_common.sh@10 -- # set +x 00:08:23.797 ************************************ 00:08:23.797 START TEST nvmf_multipath 00:08:23.797 ************************************ 00:08:23.797 15:06:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:23.797 * Looking for test storage... 00:08:23.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:23.797 15:06:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:23.797 15:06:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:23.797 15:06:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:24.082 15:06:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:24.082 15:06:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:24.082 15:06:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:24.082 15:06:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:24.082 15:06:53 -- scripts/common.sh@335 -- # IFS=.-: 00:08:24.082 15:06:53 -- scripts/common.sh@335 -- # read -ra ver1 00:08:24.082 15:06:53 -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.082 15:06:53 -- scripts/common.sh@336 -- # read -ra ver2 00:08:24.082 15:06:53 -- scripts/common.sh@337 -- # local 'op=<' 00:08:24.082 15:06:53 -- scripts/common.sh@339 -- # ver1_l=2 00:08:24.082 15:06:53 -- scripts/common.sh@340 -- # ver2_l=1 00:08:24.082 15:06:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:24.082 15:06:53 -- scripts/common.sh@343 -- # case "$op" in 00:08:24.082 15:06:53 -- scripts/common.sh@344 -- # : 1 00:08:24.082 15:06:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:24.082 15:06:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:24.082 15:06:53 -- scripts/common.sh@364 -- # decimal 1 00:08:24.082 15:06:53 -- scripts/common.sh@352 -- # local d=1 00:08:24.082 15:06:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.082 15:06:53 -- scripts/common.sh@354 -- # echo 1 00:08:24.082 15:06:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:24.082 15:06:53 -- scripts/common.sh@365 -- # decimal 2 00:08:24.082 15:06:53 -- scripts/common.sh@352 -- # local d=2 00:08:24.082 15:06:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.082 15:06:53 -- scripts/common.sh@354 -- # echo 2 00:08:24.082 15:06:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:24.082 15:06:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:24.082 15:06:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:24.082 15:06:53 -- scripts/common.sh@367 -- # return 0 00:08:24.082 15:06:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.082 15:06:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:24.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.082 --rc genhtml_branch_coverage=1 00:08:24.082 --rc genhtml_function_coverage=1 00:08:24.082 --rc genhtml_legend=1 00:08:24.082 --rc geninfo_all_blocks=1 00:08:24.082 --rc geninfo_unexecuted_blocks=1 00:08:24.082 00:08:24.082 ' 00:08:24.082 15:06:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:24.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.082 --rc genhtml_branch_coverage=1 00:08:24.082 --rc genhtml_function_coverage=1 00:08:24.082 --rc genhtml_legend=1 00:08:24.082 --rc geninfo_all_blocks=1 00:08:24.082 --rc geninfo_unexecuted_blocks=1 00:08:24.082 00:08:24.082 ' 00:08:24.082 15:06:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:24.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.082 --rc genhtml_branch_coverage=1 00:08:24.082 --rc genhtml_function_coverage=1 00:08:24.083 --rc genhtml_legend=1 00:08:24.083 --rc geninfo_all_blocks=1 00:08:24.083 --rc geninfo_unexecuted_blocks=1 00:08:24.083 00:08:24.083 ' 00:08:24.083 15:06:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:24.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.083 --rc genhtml_branch_coverage=1 00:08:24.083 --rc genhtml_function_coverage=1 00:08:24.083 --rc genhtml_legend=1 00:08:24.083 --rc geninfo_all_blocks=1 00:08:24.083 --rc geninfo_unexecuted_blocks=1 00:08:24.083 00:08:24.083 ' 00:08:24.083 15:06:53 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:24.083 15:06:53 -- nvmf/common.sh@7 -- # uname -s 00:08:24.083 15:06:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.083 15:06:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.083 15:06:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.083 15:06:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.083 15:06:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.083 15:06:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.083 15:06:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.083 15:06:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.083 15:06:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.083 15:06:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.083 15:06:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:08:24.083 
15:06:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:08:24.083 15:06:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.083 15:06:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.083 15:06:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:24.083 15:06:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:24.083 15:06:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.083 15:06:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.083 15:06:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.083 15:06:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.083 15:06:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.083 15:06:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.083 15:06:53 -- paths/export.sh@5 -- # export PATH 00:08:24.083 15:06:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.083 15:06:53 -- nvmf/common.sh@46 -- # : 0 00:08:24.083 15:06:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:24.083 15:06:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:24.083 15:06:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:24.083 15:06:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.083 15:06:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.083 15:06:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
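The NVME_HOSTNQN/NVME_HOSTID pair generated here is what the initiator-side helpers splice into every nvme connect later in the run. A rough sketch of how those values are used (in this run the host ID is simply the uuid portion of the generated NQN; the parameter expansion below is one way to derive it, shown only to match the values in the trace):

  HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:819f6113-...
  HOSTID=${HOSTNQN##*:}                # the uuid suffix doubles as the host ID here
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G   # same flags as the multipath connects below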
00:08:24.083 15:06:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:24.083 15:06:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:24.083 15:06:53 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:24.083 15:06:53 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:24.083 15:06:53 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:24.083 15:06:53 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:24.083 15:06:53 -- target/multipath.sh@43 -- # nvmftestinit 00:08:24.083 15:06:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:24.083 15:06:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.083 15:06:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:24.083 15:06:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:24.083 15:06:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:24.083 15:06:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.083 15:06:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.083 15:06:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.083 15:06:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:24.083 15:06:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:24.083 15:06:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:24.083 15:06:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:24.083 15:06:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:24.083 15:06:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:24.083 15:06:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.083 15:06:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.083 15:06:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:24.083 15:06:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:24.083 15:06:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:24.083 15:06:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:24.083 15:06:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:24.083 15:06:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.083 15:06:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:24.083 15:06:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:24.083 15:06:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:24.083 15:06:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:24.083 15:06:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:24.083 15:06:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:24.083 Cannot find device "nvmf_tgt_br" 00:08:24.083 15:06:53 -- nvmf/common.sh@154 -- # true 00:08:24.083 15:06:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:24.083 Cannot find device "nvmf_tgt_br2" 00:08:24.083 15:06:53 -- nvmf/common.sh@155 -- # true 00:08:24.083 15:06:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:24.083 15:06:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:24.083 Cannot find device "nvmf_tgt_br" 00:08:24.083 15:06:53 -- nvmf/common.sh@157 -- # true 00:08:24.083 15:06:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:24.083 Cannot find device "nvmf_tgt_br2" 00:08:24.083 15:06:53 -- nvmf/common.sh@158 -- # true 00:08:24.083 15:06:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:24.083 15:06:53 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:24.083 15:06:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:24.083 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:24.083 15:06:53 -- nvmf/common.sh@161 -- # true 00:08:24.083 15:06:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:24.084 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:24.084 15:06:53 -- nvmf/common.sh@162 -- # true 00:08:24.084 15:06:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:24.084 15:06:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:24.084 15:06:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:24.084 15:06:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:24.084 15:06:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:24.084 15:06:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:24.343 15:06:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:24.343 15:06:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:24.343 15:06:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:24.343 15:06:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:24.343 15:06:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:24.343 15:06:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:24.343 15:06:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:24.343 15:06:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:24.343 15:06:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:24.343 15:06:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:24.343 15:06:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:24.343 15:06:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:24.343 15:06:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:24.343 15:06:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:24.343 15:06:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:24.343 15:06:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:24.343 15:06:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:24.343 15:06:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:24.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:24.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:08:24.343 00:08:24.343 --- 10.0.0.2 ping statistics --- 00:08:24.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.343 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:24.343 15:06:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:24.343 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:24.343 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:08:24.343 00:08:24.343 --- 10.0.0.3 ping statistics --- 00:08:24.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.343 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:24.343 15:06:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:24.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:24.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:24.343 00:08:24.343 --- 10.0.0.1 ping statistics --- 00:08:24.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.343 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:24.343 15:06:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.343 15:06:53 -- nvmf/common.sh@421 -- # return 0 00:08:24.343 15:06:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:24.343 15:06:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.343 15:06:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:24.343 15:06:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:24.343 15:06:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:24.343 15:06:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:24.343 15:06:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:24.343 15:06:53 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:08:24.343 15:06:53 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:24.343 15:06:53 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:24.343 15:06:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:24.343 15:06:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:24.343 15:06:53 -- common/autotest_common.sh@10 -- # set +x 00:08:24.343 15:06:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:24.343 15:06:53 -- nvmf/common.sh@469 -- # nvmfpid=62227 00:08:24.343 15:06:53 -- nvmf/common.sh@470 -- # waitforlisten 62227 00:08:24.343 15:06:53 -- common/autotest_common.sh@829 -- # '[' -z 62227 ']' 00:08:24.343 15:06:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.343 15:06:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:24.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.343 15:06:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.343 15:06:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:24.343 15:06:53 -- common/autotest_common.sh@10 -- # set +x 00:08:24.343 [2024-11-06 15:06:53.577167] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:24.343 [2024-11-06 15:06:53.577263] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.602 [2024-11-06 15:06:53.721020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.602 [2024-11-06 15:06:53.794794] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:24.602 [2024-11-06 15:06:53.794961] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:24.602 [2024-11-06 15:06:53.794978] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.602 [2024-11-06 15:06:53.794989] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.602 [2024-11-06 15:06:53.795061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.602 [2024-11-06 15:06:53.795371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.602 [2024-11-06 15:06:53.795509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.602 [2024-11-06 15:06:53.795516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.538 15:06:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:25.538 15:06:54 -- common/autotest_common.sh@862 -- # return 0 00:08:25.538 15:06:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:25.538 15:06:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:25.538 15:06:54 -- common/autotest_common.sh@10 -- # set +x 00:08:25.538 15:06:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.538 15:06:54 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:25.797 [2024-11-06 15:06:54.891593] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.797 15:06:54 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:26.056 Malloc0 00:08:26.056 15:06:55 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:26.315 15:06:55 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:26.574 15:06:55 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.833 [2024-11-06 15:06:55.965435] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.833 15:06:55 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:27.091 [2024-11-06 15:06:56.221774] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:27.091 15:06:56 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 --hostid=819f6113-9743-44c3-be27-f14abf178c18 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:08:27.350 15:06:56 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 --hostid=819f6113-9743-44c3-be27-f14abf178c18 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:27.350 15:06:56 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:27.350 15:06:56 -- common/autotest_common.sh@1187 -- # local i=0 00:08:27.350 15:06:56 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:27.350 15:06:56 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:27.350 15:06:56 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:29.255 15:06:58 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
00:08:29.255 15:06:58 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:29.255 15:06:58 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:29.514 15:06:58 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:29.514 15:06:58 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:29.514 15:06:58 -- common/autotest_common.sh@1197 -- # return 0 00:08:29.514 15:06:58 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:29.514 15:06:58 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:29.514 15:06:58 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:29.514 15:06:58 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:29.514 15:06:58 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:29.514 15:06:58 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:29.514 15:06:58 -- target/multipath.sh@38 -- # return 0 00:08:29.514 15:06:58 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:29.514 15:06:58 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:29.514 15:06:58 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:29.514 15:06:58 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:29.514 15:06:58 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:29.514 15:06:58 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:29.514 15:06:58 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:29.514 15:06:58 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:29.514 15:06:58 -- target/multipath.sh@22 -- # local timeout=20 00:08:29.514 15:06:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:29.514 15:06:58 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:29.514 15:06:58 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:29.514 15:06:58 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:29.514 15:06:58 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:29.514 15:06:58 -- target/multipath.sh@22 -- # local timeout=20 00:08:29.514 15:06:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:29.514 15:06:58 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:29.514 15:06:58 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:29.514 15:06:58 -- target/multipath.sh@85 -- # echo numa 00:08:29.514 15:06:58 -- target/multipath.sh@88 -- # fio_pid=62322 00:08:29.514 15:06:58 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:29.514 15:06:58 -- target/multipath.sh@90 -- # sleep 1 00:08:29.514 [global] 00:08:29.514 thread=1 00:08:29.514 invalidate=1 00:08:29.514 rw=randrw 00:08:29.514 time_based=1 00:08:29.514 runtime=6 00:08:29.514 ioengine=libaio 00:08:29.514 direct=1 00:08:29.514 bs=4096 00:08:29.514 iodepth=128 00:08:29.514 norandommap=0 00:08:29.514 numjobs=1 00:08:29.514 00:08:29.514 verify_dump=1 00:08:29.514 verify_backlog=512 00:08:29.514 verify_state_save=0 00:08:29.514 do_verify=1 00:08:29.514 verify=crc32c-intel 00:08:29.514 [job0] 00:08:29.514 filename=/dev/nvme0n1 00:08:29.514 Could not set queue depth (nvme0n1) 00:08:29.514 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:29.514 fio-3.35 00:08:29.514 Starting 1 thread 00:08:30.451 15:06:59 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:30.710 15:06:59 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:30.968 15:07:00 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:30.968 15:07:00 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:30.968 15:07:00 -- target/multipath.sh@22 -- # local timeout=20 00:08:30.968 15:07:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:30.968 15:07:00 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:30.968 15:07:00 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:30.968 15:07:00 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:30.968 15:07:00 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:30.968 15:07:00 -- target/multipath.sh@22 -- # local timeout=20 00:08:30.968 15:07:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:30.968 15:07:00 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:30.968 15:07:00 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:30.968 15:07:00 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:31.227 15:07:00 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:31.485 15:07:00 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:31.485 15:07:00 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:31.485 15:07:00 -- target/multipath.sh@22 -- # local timeout=20 00:08:31.485 15:07:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:31.485 15:07:00 -- target/multipath.sh@25 -- # [[ ! 
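The check_ana_state calls traced above do nothing more exotic than poll the per-path sysfs ana_state attribute until it reports the expected value. A rough reconstruction of that helper; the retry/sleep part is inferred from the timeout=20 local, since every check in this run passes on the first try:

  check_ana_state() {
      local path=$1 ana_state=$2 timeout=20
      local ana_state_f=/sys/block/$path/ana_state
      while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
          (( timeout-- == 0 )) && return 1    # give up after ~20 tries
          sleep 1
      done
  }

  check_ana_state nvme0c0n1 optimized    # the two controller paths created by the two nvme connect calls
  check_ana_state nvme0c1n1 optimized

With both paths optimized, the test flips the listeners between inaccessible/non_optimized via nvmf_subsystem_listener_set_ana_state while the fio job above keeps I/O running against /dev/nvme0n1.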
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:31.485 15:07:00 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:31.485 15:07:00 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:31.485 15:07:00 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:31.485 15:07:00 -- target/multipath.sh@22 -- # local timeout=20 00:08:31.485 15:07:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:31.485 15:07:00 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:31.485 15:07:00 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:31.485 15:07:00 -- target/multipath.sh@104 -- # wait 62322 00:08:35.674 00:08:35.674 job0: (groupid=0, jobs=1): err= 0: pid=62343: Wed Nov 6 15:07:04 2024 00:08:35.674 read: IOPS=11.1k, BW=43.4MiB/s (45.5MB/s)(261MiB/6007msec) 00:08:35.674 slat (usec): min=4, max=5333, avg=51.77, stdev=216.11 00:08:35.674 clat (usec): min=1321, max=14269, avg=7762.39, stdev=1334.05 00:08:35.674 lat (usec): min=1330, max=14280, avg=7814.16, stdev=1338.12 00:08:35.674 clat percentiles (usec): 00:08:35.674 | 1.00th=[ 4146], 5.00th=[ 5800], 10.00th=[ 6521], 20.00th=[ 6980], 00:08:35.674 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7898], 00:08:35.674 | 70.00th=[ 8160], 80.00th=[ 8455], 90.00th=[ 9110], 95.00th=[10552], 00:08:35.674 | 99.00th=[11994], 99.50th=[12256], 99.90th=[12780], 99.95th=[12911], 00:08:35.674 | 99.99th=[14091] 00:08:35.674 bw ( KiB/s): min= 9232, max=30552, per=52.16%, avg=23200.36, stdev=6838.61, samples=11 00:08:35.674 iops : min= 2308, max= 7638, avg=5800.00, stdev=1709.57, samples=11 00:08:35.674 write: IOPS=6667, BW=26.0MiB/s (27.3MB/s)(138MiB/5295msec); 0 zone resets 00:08:35.674 slat (usec): min=15, max=2077, avg=62.48, stdev=149.15 00:08:35.674 clat (usec): min=1785, max=13425, avg=6897.29, stdev=1205.23 00:08:35.674 lat (usec): min=1809, max=13457, avg=6959.78, stdev=1210.72 00:08:35.674 clat percentiles (usec): 00:08:35.674 | 1.00th=[ 3228], 5.00th=[ 4228], 10.00th=[ 5473], 20.00th=[ 6325], 00:08:35.674 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 7046], 60.00th=[ 7242], 00:08:35.674 | 70.00th=[ 7439], 80.00th=[ 7635], 90.00th=[ 8029], 95.00th=[ 8291], 00:08:35.674 | 99.00th=[10290], 99.50th=[10945], 99.90th=[11863], 99.95th=[12256], 00:08:35.674 | 99.99th=[13304] 00:08:35.674 bw ( KiB/s): min= 9464, max=29920, per=87.22%, avg=23261.55, stdev=6549.91, samples=11 00:08:35.674 iops : min= 2366, max= 7480, avg=5815.27, stdev=1637.39, samples=11 00:08:35.674 lat (msec) : 2=0.03%, 4=1.92%, 10=93.54%, 20=4.51% 00:08:35.674 cpu : usr=5.94%, sys=22.33%, ctx=5956, majf=0, minf=108 00:08:35.674 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:35.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:35.674 issued rwts: total=66798,35303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:35.674 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:35.674 00:08:35.674 Run status group 0 (all jobs): 00:08:35.674 READ: bw=43.4MiB/s (45.5MB/s), 43.4MiB/s-43.4MiB/s (45.5MB/s-45.5MB/s), io=261MiB (274MB), run=6007-6007msec 00:08:35.674 WRITE: bw=26.0MiB/s (27.3MB/s), 26.0MiB/s-26.0MiB/s (27.3MB/s-27.3MB/s), io=138MiB (145MB), run=5295-5295msec 00:08:35.674 00:08:35.674 Disk stats (read/write): 00:08:35.674 nvme0n1: ios=65890/34680, merge=0/0, 
ticks=487834/223655, in_queue=711489, util=98.66% 00:08:35.674 15:07:04 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:08:36.242 15:07:05 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:36.242 15:07:05 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:36.242 15:07:05 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:36.242 15:07:05 -- target/multipath.sh@22 -- # local timeout=20 00:08:36.242 15:07:05 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:36.242 15:07:05 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:36.242 15:07:05 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:36.242 15:07:05 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:36.242 15:07:05 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:36.242 15:07:05 -- target/multipath.sh@22 -- # local timeout=20 00:08:36.242 15:07:05 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:36.242 15:07:05 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:36.242 15:07:05 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:36.242 15:07:05 -- target/multipath.sh@113 -- # echo round-robin 00:08:36.242 15:07:05 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:36.242 15:07:05 -- target/multipath.sh@116 -- # fio_pid=62425 00:08:36.242 15:07:05 -- target/multipath.sh@118 -- # sleep 1 00:08:36.242 [global] 00:08:36.242 thread=1 00:08:36.242 invalidate=1 00:08:36.242 rw=randrw 00:08:36.242 time_based=1 00:08:36.242 runtime=6 00:08:36.242 ioengine=libaio 00:08:36.242 direct=1 00:08:36.242 bs=4096 00:08:36.242 iodepth=128 00:08:36.242 norandommap=0 00:08:36.242 numjobs=1 00:08:36.242 00:08:36.242 verify_dump=1 00:08:36.242 verify_backlog=512 00:08:36.242 verify_state_save=0 00:08:36.242 do_verify=1 00:08:36.243 verify=crc32c-intel 00:08:36.502 [job0] 00:08:36.502 filename=/dev/nvme0n1 00:08:36.502 Could not set queue depth (nvme0n1) 00:08:36.502 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:36.502 fio-3.35 00:08:36.502 Starting 1 thread 00:08:37.459 15:07:06 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:37.727 15:07:06 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:37.986 15:07:07 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:37.986 15:07:07 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:37.986 15:07:07 -- target/multipath.sh@22 -- # local timeout=20 00:08:37.986 15:07:07 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:37.986 15:07:07 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:37.986 15:07:07 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:37.986 15:07:07 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:37.986 15:07:07 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:37.986 15:07:07 -- target/multipath.sh@22 -- # local timeout=20 00:08:37.986 15:07:07 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:37.986 15:07:07 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:37.986 15:07:07 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:37.986 15:07:07 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:38.244 15:07:07 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:38.502 15:07:07 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:38.502 15:07:07 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:38.502 15:07:07 -- target/multipath.sh@22 -- # local timeout=20 00:08:38.502 15:07:07 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:38.502 15:07:07 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:38.502 15:07:07 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:38.502 15:07:07 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:38.502 15:07:07 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:38.502 15:07:07 -- target/multipath.sh@22 -- # local timeout=20 00:08:38.502 15:07:07 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:38.502 15:07:07 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:38.502 15:07:07 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:38.502 15:07:07 -- target/multipath.sh@132 -- # wait 62425 00:08:42.692 00:08:42.692 job0: (groupid=0, jobs=1): err= 0: pid=62446: Wed Nov 6 15:07:11 2024 00:08:42.692 read: IOPS=12.0k, BW=46.7MiB/s (49.0MB/s)(280MiB/6002msec) 00:08:42.692 slat (usec): min=2, max=5992, avg=41.53, stdev=191.24 00:08:42.692 clat (usec): min=941, max=14315, avg=7304.82, stdev=1725.51 00:08:42.692 lat (usec): min=1280, max=14325, avg=7346.35, stdev=1739.53 00:08:42.692 clat percentiles (usec): 00:08:42.692 | 1.00th=[ 3294], 5.00th=[ 4293], 10.00th=[ 4948], 20.00th=[ 5866], 00:08:42.692 | 30.00th=[ 6718], 40.00th=[ 7177], 50.00th=[ 7504], 60.00th=[ 7767], 00:08:42.692 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[10028], 00:08:42.692 | 99.00th=[12256], 99.50th=[12649], 99.90th=[13435], 99.95th=[13566], 00:08:42.692 | 99.99th=[13960] 00:08:42.692 bw ( KiB/s): min= 9064, max=36368, per=54.25%, avg=25962.91, stdev=7419.75, samples=11 00:08:42.692 iops : min= 2266, max= 9092, avg=6490.73, stdev=1854.94, samples=11 00:08:42.692 write: IOPS=7070, BW=27.6MiB/s (29.0MB/s)(148MiB/5356msec); 0 zone resets 00:08:42.692 slat (usec): min=4, max=2007, avg=53.73, stdev=131.66 00:08:42.692 clat (usec): min=1797, max=13843, avg=6267.36, stdev=1698.34 00:08:42.692 lat (usec): min=1820, max=13872, avg=6321.09, stdev=1713.04 00:08:42.692 clat percentiles (usec): 00:08:42.692 | 1.00th=[ 2737], 5.00th=[ 3326], 10.00th=[ 3720], 20.00th=[ 4359], 00:08:42.692 | 30.00th=[ 5211], 40.00th=[ 6390], 50.00th=[ 6849], 60.00th=[ 7111], 00:08:42.692 | 70.00th=[ 7373], 80.00th=[ 7635], 90.00th=[ 7963], 95.00th=[ 8291], 00:08:42.692 | 99.00th=[10028], 99.50th=[11076], 99.90th=[12256], 99.95th=[12649], 00:08:42.692 | 99.99th=[13698] 00:08:42.692 bw ( KiB/s): min= 9576, max=36864, per=91.69%, avg=25932.55, stdev=7237.98, samples=11 00:08:42.692 iops : min= 2394, max= 9216, avg=6483.09, stdev=1809.50, samples=11 00:08:42.692 lat (usec) : 1000=0.01% 00:08:42.692 lat (msec) : 2=0.10%, 4=7.09%, 10=89.13%, 20=3.68% 00:08:42.692 cpu : usr=6.17%, sys=23.36%, ctx=5885, majf=0, minf=102 00:08:42.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:42.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:42.692 issued rwts: total=71803,37872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:42.692 00:08:42.692 Run status group 0 (all jobs): 00:08:42.692 READ: bw=46.7MiB/s (49.0MB/s), 46.7MiB/s-46.7MiB/s (49.0MB/s-49.0MB/s), io=280MiB (294MB), run=6002-6002msec 00:08:42.692 WRITE: bw=27.6MiB/s (29.0MB/s), 27.6MiB/s-27.6MiB/s (29.0MB/s-29.0MB/s), io=148MiB (155MB), run=5356-5356msec 00:08:42.692 00:08:42.692 Disk stats (read/write): 00:08:42.692 nvme0n1: ios=70205/37872, merge=0/0, ticks=487168/220420, in_queue=707588, util=98.53% 00:08:42.692 15:07:11 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:42.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:42.692 15:07:11 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:42.692 15:07:11 -- common/autotest_common.sh@1208 -- # local i=0 00:08:42.692 15:07:11 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:42.692 15:07:11 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.692 15:07:11 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.692 15:07:11 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:42.692 15:07:11 -- common/autotest_common.sh@1220 -- # return 0 00:08:42.692 15:07:11 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:42.951 15:07:12 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:42.951 15:07:12 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:42.951 15:07:12 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:42.951 15:07:12 -- target/multipath.sh@144 -- # nvmftestfini 00:08:42.951 15:07:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:42.951 15:07:12 -- nvmf/common.sh@116 -- # sync 00:08:42.951 15:07:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:42.951 15:07:12 -- nvmf/common.sh@119 -- # set +e 00:08:42.951 15:07:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:42.951 15:07:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:42.951 rmmod nvme_tcp 00:08:42.951 rmmod nvme_fabrics 00:08:42.951 rmmod nvme_keyring 00:08:43.210 15:07:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:43.210 15:07:12 -- nvmf/common.sh@123 -- # set -e 00:08:43.210 15:07:12 -- nvmf/common.sh@124 -- # return 0 00:08:43.210 15:07:12 -- nvmf/common.sh@477 -- # '[' -n 62227 ']' 00:08:43.210 15:07:12 -- nvmf/common.sh@478 -- # killprocess 62227 00:08:43.210 15:07:12 -- common/autotest_common.sh@936 -- # '[' -z 62227 ']' 00:08:43.210 15:07:12 -- common/autotest_common.sh@940 -- # kill -0 62227 00:08:43.210 15:07:12 -- common/autotest_common.sh@941 -- # uname 00:08:43.210 15:07:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:43.210 15:07:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62227 00:08:43.210 killing process with pid 62227 00:08:43.210 15:07:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:43.210 15:07:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:43.210 15:07:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62227' 00:08:43.210 15:07:12 -- common/autotest_common.sh@955 -- # kill 62227 00:08:43.210 15:07:12 -- common/autotest_common.sh@960 -- # wait 62227 00:08:43.210 15:07:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:43.210 15:07:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:43.210 15:07:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:43.210 15:07:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:43.210 15:07:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:43.210 15:07:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.210 15:07:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.210 15:07:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.469 15:07:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:43.469 ************************************ 00:08:43.469 END TEST nvmf_multipath 00:08:43.469 ************************************ 00:08:43.469 00:08:43.469 real 0m19.521s 00:08:43.469 user 1m13.280s 00:08:43.469 sys 0m10.042s 00:08:43.470 15:07:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:43.470 15:07:12 -- common/autotest_common.sh@10 -- # set +x 00:08:43.470 15:07:12 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:43.470 15:07:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:43.470 15:07:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.470 15:07:12 -- common/autotest_common.sh@10 -- # set +x 00:08:43.470 ************************************ 00:08:43.470 START TEST nvmf_zcopy 00:08:43.470 ************************************ 00:08:43.470 15:07:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:43.470 * Looking for test storage... 00:08:43.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:43.470 15:07:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:43.470 15:07:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:43.470 15:07:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:43.470 15:07:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:43.470 15:07:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:43.470 15:07:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:43.470 15:07:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:43.470 15:07:12 -- scripts/common.sh@335 -- # IFS=.-: 00:08:43.470 15:07:12 -- scripts/common.sh@335 -- # read -ra ver1 00:08:43.470 15:07:12 -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.470 15:07:12 -- scripts/common.sh@336 -- # read -ra ver2 00:08:43.470 15:07:12 -- scripts/common.sh@337 -- # local 'op=<' 00:08:43.470 15:07:12 -- scripts/common.sh@339 -- # ver1_l=2 00:08:43.470 15:07:12 -- scripts/common.sh@340 -- # ver2_l=1 00:08:43.470 15:07:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:43.470 15:07:12 -- scripts/common.sh@343 -- # case "$op" in 00:08:43.470 15:07:12 -- scripts/common.sh@344 -- # : 1 00:08:43.470 15:07:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:43.470 15:07:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.470 15:07:12 -- scripts/common.sh@364 -- # decimal 1 00:08:43.470 15:07:12 -- scripts/common.sh@352 -- # local d=1 00:08:43.470 15:07:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.470 15:07:12 -- scripts/common.sh@354 -- # echo 1 00:08:43.470 15:07:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:43.470 15:07:12 -- scripts/common.sh@365 -- # decimal 2 00:08:43.470 15:07:12 -- scripts/common.sh@352 -- # local d=2 00:08:43.470 15:07:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.470 15:07:12 -- scripts/common.sh@354 -- # echo 2 00:08:43.470 15:07:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:43.470 15:07:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:43.470 15:07:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:43.470 15:07:12 -- scripts/common.sh@367 -- # return 0 00:08:43.470 15:07:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.470 15:07:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:43.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.470 --rc genhtml_branch_coverage=1 00:08:43.470 --rc genhtml_function_coverage=1 00:08:43.470 --rc genhtml_legend=1 00:08:43.470 --rc geninfo_all_blocks=1 00:08:43.470 --rc geninfo_unexecuted_blocks=1 00:08:43.470 00:08:43.470 ' 00:08:43.470 15:07:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:43.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.470 --rc genhtml_branch_coverage=1 00:08:43.470 --rc genhtml_function_coverage=1 00:08:43.470 --rc genhtml_legend=1 00:08:43.470 --rc geninfo_all_blocks=1 00:08:43.470 --rc geninfo_unexecuted_blocks=1 00:08:43.470 00:08:43.470 ' 00:08:43.470 15:07:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:43.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.470 --rc genhtml_branch_coverage=1 00:08:43.470 --rc genhtml_function_coverage=1 00:08:43.470 --rc genhtml_legend=1 00:08:43.470 --rc geninfo_all_blocks=1 00:08:43.470 --rc geninfo_unexecuted_blocks=1 00:08:43.470 00:08:43.470 ' 00:08:43.470 15:07:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:43.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.470 --rc genhtml_branch_coverage=1 00:08:43.470 --rc genhtml_function_coverage=1 00:08:43.470 --rc genhtml_legend=1 00:08:43.470 --rc geninfo_all_blocks=1 00:08:43.470 --rc geninfo_unexecuted_blocks=1 00:08:43.470 00:08:43.470 ' 00:08:43.470 15:07:12 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:43.729 15:07:12 -- nvmf/common.sh@7 -- # uname -s 00:08:43.729 15:07:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.729 15:07:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.729 15:07:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.729 15:07:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.729 15:07:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.729 15:07:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.729 15:07:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.729 15:07:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.729 15:07:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.729 15:07:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.729 15:07:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:08:43.729 
15:07:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:08:43.729 15:07:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.729 15:07:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.729 15:07:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:43.729 15:07:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:43.729 15:07:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.729 15:07:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.729 15:07:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.729 15:07:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.729 15:07:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.730 15:07:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.730 15:07:12 -- paths/export.sh@5 -- # export PATH 00:08:43.730 15:07:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.730 15:07:12 -- nvmf/common.sh@46 -- # : 0 00:08:43.730 15:07:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:43.730 15:07:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:43.730 15:07:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:43.730 15:07:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.730 15:07:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.730 15:07:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
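For reference, the host identity exported by nvmf/common.sh in the trace above is just an NQN generated by nvme-cli plus its UUID suffix. A minimal sketch of that setup follows; pulling the UUID out with parameter expansion is an assumption for illustration, not the literal common.sh code.
# Hedged sketch of the host-identity setup traced above (not verbatim nvmf/common.sh):
NVME_HOSTNQN=$(nvme gen-hostnqn)                 # e.g. nqn.2014-08.org.nvmexpress:uuid:819f6113-...
NVME_HOSTID=${NVME_HOSTNQN##*:}                  # assumption: host ID is the UUID suffix of the NQN
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")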
00:08:43.730 15:07:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:43.730 15:07:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:43.730 15:07:12 -- target/zcopy.sh@12 -- # nvmftestinit 00:08:43.730 15:07:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:43.730 15:07:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.730 15:07:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:43.730 15:07:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:43.730 15:07:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:43.730 15:07:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.730 15:07:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.730 15:07:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.730 15:07:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:43.730 15:07:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:43.730 15:07:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:43.730 15:07:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:43.730 15:07:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:43.730 15:07:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:43.730 15:07:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.730 15:07:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.730 15:07:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:43.730 15:07:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:43.730 15:07:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:43.730 15:07:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:43.730 15:07:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:43.730 15:07:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.730 15:07:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:43.730 15:07:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:43.730 15:07:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:43.730 15:07:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:43.730 15:07:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:43.730 15:07:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:43.730 Cannot find device "nvmf_tgt_br" 00:08:43.730 15:07:12 -- nvmf/common.sh@154 -- # true 00:08:43.730 15:07:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:43.730 Cannot find device "nvmf_tgt_br2" 00:08:43.730 15:07:12 -- nvmf/common.sh@155 -- # true 00:08:43.730 15:07:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:43.730 15:07:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:43.730 Cannot find device "nvmf_tgt_br" 00:08:43.730 15:07:12 -- nvmf/common.sh@157 -- # true 00:08:43.730 15:07:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:43.730 Cannot find device "nvmf_tgt_br2" 00:08:43.730 15:07:12 -- nvmf/common.sh@158 -- # true 00:08:43.730 15:07:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:43.730 15:07:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:43.730 15:07:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:43.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:43.730 15:07:12 -- nvmf/common.sh@161 -- # true 00:08:43.730 15:07:12 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:43.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:43.730 15:07:12 -- nvmf/common.sh@162 -- # true 00:08:43.730 15:07:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:43.730 15:07:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:43.730 15:07:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:43.730 15:07:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:43.730 15:07:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:43.730 15:07:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:43.730 15:07:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:43.730 15:07:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:43.989 15:07:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:43.989 15:07:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:43.989 15:07:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:43.989 15:07:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:43.989 15:07:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:43.989 15:07:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:43.989 15:07:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:43.989 15:07:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:43.989 15:07:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:43.989 15:07:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:43.989 15:07:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:43.989 15:07:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:43.989 15:07:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:43.989 15:07:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:43.989 15:07:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:43.989 15:07:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:43.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:08:43.989 00:08:43.989 --- 10.0.0.2 ping statistics --- 00:08:43.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.989 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:43.989 15:07:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:43.989 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:43.989 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:08:43.989 00:08:43.989 --- 10.0.0.3 ping statistics --- 00:08:43.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.989 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:43.989 15:07:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:43.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:43.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:43.989 00:08:43.989 --- 10.0.0.1 ping statistics --- 00:08:43.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.989 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:43.989 15:07:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.989 15:07:13 -- nvmf/common.sh@421 -- # return 0 00:08:43.989 15:07:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:43.989 15:07:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.989 15:07:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:43.989 15:07:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:43.989 15:07:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.989 15:07:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:43.989 15:07:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:43.989 15:07:13 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:43.989 15:07:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:43.989 15:07:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:43.989 15:07:13 -- common/autotest_common.sh@10 -- # set +x 00:08:43.989 15:07:13 -- nvmf/common.sh@469 -- # nvmfpid=62698 00:08:43.989 15:07:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:43.989 15:07:13 -- nvmf/common.sh@470 -- # waitforlisten 62698 00:08:43.989 15:07:13 -- common/autotest_common.sh@829 -- # '[' -z 62698 ']' 00:08:43.989 15:07:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.989 15:07:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.989 15:07:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.989 15:07:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.989 15:07:13 -- common/autotest_common.sh@10 -- # set +x 00:08:43.989 [2024-11-06 15:07:13.199084] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:43.989 [2024-11-06 15:07:13.199182] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.248 [2024-11-06 15:07:13.337116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.248 [2024-11-06 15:07:13.389161] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:44.248 [2024-11-06 15:07:13.389540] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.248 [2024-11-06 15:07:13.389562] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.248 [2024-11-06 15:07:13.389571] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
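Taken together, the nvmf_veth_init plumbing and the target launch traced above reduce to roughly the following; a condensed sketch that keeps the interface names and addresses from this log but omits the second target interface, the link-up commands, the bridge FORWARD rule, and the ping checks.
# Condensed sketch of the netns/veth topology set up above (root assumed; some steps omitted):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
modprobe nvme-tcp
# nvmf_tgt then runs inside the namespace; the test waits for /var/tmp/spdk.sock to answer.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &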
00:08:44.248 [2024-11-06 15:07:13.389603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.184 15:07:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:45.184 15:07:14 -- common/autotest_common.sh@862 -- # return 0 00:08:45.184 15:07:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:45.184 15:07:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:45.184 15:07:14 -- common/autotest_common.sh@10 -- # set +x 00:08:45.184 15:07:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.184 15:07:14 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:45.185 15:07:14 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:45.185 15:07:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.185 15:07:14 -- common/autotest_common.sh@10 -- # set +x 00:08:45.185 [2024-11-06 15:07:14.229150] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.185 15:07:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.185 15:07:14 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:45.185 15:07:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.185 15:07:14 -- common/autotest_common.sh@10 -- # set +x 00:08:45.185 15:07:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.185 15:07:14 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:45.185 15:07:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.185 15:07:14 -- common/autotest_common.sh@10 -- # set +x 00:08:45.185 [2024-11-06 15:07:14.249208] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.185 15:07:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.185 15:07:14 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:45.185 15:07:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.185 15:07:14 -- common/autotest_common.sh@10 -- # set +x 00:08:45.185 15:07:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.185 15:07:14 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:45.185 15:07:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.185 15:07:14 -- common/autotest_common.sh@10 -- # set +x 00:08:45.185 malloc0 00:08:45.185 15:07:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.185 15:07:14 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:45.185 15:07:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.185 15:07:14 -- common/autotest_common.sh@10 -- # set +x 00:08:45.185 15:07:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.185 15:07:14 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:45.185 15:07:14 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:45.185 15:07:14 -- nvmf/common.sh@520 -- # config=() 00:08:45.185 15:07:14 -- nvmf/common.sh@520 -- # local subsystem config 00:08:45.185 15:07:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:45.185 15:07:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:45.185 { 00:08:45.185 "params": { 00:08:45.185 "name": "Nvme$subsystem", 00:08:45.185 "trtype": "$TEST_TRANSPORT", 
00:08:45.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.185 "adrfam": "ipv4", 00:08:45.185 "trsvcid": "$NVMF_PORT", 00:08:45.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.185 "hdgst": ${hdgst:-false}, 00:08:45.185 "ddgst": ${ddgst:-false} 00:08:45.185 }, 00:08:45.185 "method": "bdev_nvme_attach_controller" 00:08:45.185 } 00:08:45.185 EOF 00:08:45.185 )") 00:08:45.185 15:07:14 -- nvmf/common.sh@542 -- # cat 00:08:45.185 15:07:14 -- nvmf/common.sh@544 -- # jq . 00:08:45.185 15:07:14 -- nvmf/common.sh@545 -- # IFS=, 00:08:45.185 15:07:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:45.185 "params": { 00:08:45.185 "name": "Nvme1", 00:08:45.185 "trtype": "tcp", 00:08:45.185 "traddr": "10.0.0.2", 00:08:45.185 "adrfam": "ipv4", 00:08:45.185 "trsvcid": "4420", 00:08:45.185 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.185 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.185 "hdgst": false, 00:08:45.185 "ddgst": false 00:08:45.185 }, 00:08:45.185 "method": "bdev_nvme_attach_controller" 00:08:45.185 }' 00:08:45.185 [2024-11-06 15:07:14.327947] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:45.185 [2024-11-06 15:07:14.328034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62731 ] 00:08:45.444 [2024-11-06 15:07:14.463284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.444 [2024-11-06 15:07:14.531765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.444 Running I/O for 10 seconds... 00:08:55.422 00:08:55.422 Latency(us) 00:08:55.422 [2024-11-06T15:07:24.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.422 [2024-11-06T15:07:24.697Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:55.422 Verification LBA range: start 0x0 length 0x1000 00:08:55.423 Nvme1n1 : 10.01 9859.62 77.03 0.00 0.00 12949.28 1012.83 20256.58 00:08:55.423 [2024-11-06T15:07:24.698Z] =================================================================================================================== 00:08:55.423 [2024-11-06T15:07:24.698Z] Total : 9859.62 77.03 0.00 0.00 12949.28 1012.83 20256.58 00:08:55.682 15:07:24 -- target/zcopy.sh@39 -- # perfpid=62854 00:08:55.682 15:07:24 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:55.682 15:07:24 -- target/zcopy.sh@41 -- # xtrace_disable 00:08:55.682 15:07:24 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:55.682 15:07:24 -- common/autotest_common.sh@10 -- # set +x 00:08:55.682 15:07:24 -- nvmf/common.sh@520 -- # config=() 00:08:55.682 15:07:24 -- nvmf/common.sh@520 -- # local subsystem config 00:08:55.682 15:07:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:55.682 15:07:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:55.682 { 00:08:55.682 "params": { 00:08:55.682 "name": "Nvme$subsystem", 00:08:55.682 "trtype": "$TEST_TRANSPORT", 00:08:55.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.682 "adrfam": "ipv4", 00:08:55.682 "trsvcid": "$NVMF_PORT", 00:08:55.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.682 "hdgst": ${hdgst:-false}, 00:08:55.682 "ddgst": ${ddgst:-false} 
00:08:55.682 }, 00:08:55.682 "method": "bdev_nvme_attach_controller" 00:08:55.682 } 00:08:55.682 EOF 00:08:55.682 )") 00:08:55.682 15:07:24 -- nvmf/common.sh@542 -- # cat 00:08:55.682 [2024-11-06 15:07:24.871836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-06 15:07:24.871899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 15:07:24 -- nvmf/common.sh@544 -- # jq . 00:08:55.682 15:07:24 -- nvmf/common.sh@545 -- # IFS=, 00:08:55.682 15:07:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:55.682 "params": { 00:08:55.682 "name": "Nvme1", 00:08:55.682 "trtype": "tcp", 00:08:55.682 "traddr": "10.0.0.2", 00:08:55.682 "adrfam": "ipv4", 00:08:55.682 "trsvcid": "4420", 00:08:55.682 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.682 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.682 "hdgst": false, 00:08:55.682 "ddgst": false 00:08:55.682 }, 00:08:55.682 "method": "bdev_nvme_attach_controller" 00:08:55.682 }' 00:08:55.682 [2024-11-06 15:07:24.883804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-06 15:07:24.883850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-06 15:07:24.891801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-06 15:07:24.891845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-06 15:07:24.903803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-06 15:07:24.903845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-06 15:07:24.905338] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:55.682 [2024-11-06 15:07:24.905418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62854 ] 00:08:55.682 [2024-11-06 15:07:24.915824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-06 15:07:24.915867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-06 15:07:24.927808] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-06 15:07:24.927850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-06 15:07:24.939814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-06 15:07:24.939856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-06 15:07:24.951807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-06 15:07:24.951848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-06 15:07:24.963822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-06 15:07:24.963863] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-06 15:07:24.975846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-06 15:07:24.975890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-06 15:07:24.987847] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-06 15:07:24.987889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-06 15:07:24.999852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-06 15:07:24.999893] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-06 15:07:25.011863] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.942 [2024-11-06 15:07:25.011905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.942 [2024-11-06 15:07:25.023869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.942 [2024-11-06 15:07:25.023912] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.942 [2024-11-06 15:07:25.035870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.942 [2024-11-06 15:07:25.035911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.942 [2024-11-06 15:07:25.039323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.942 [2024-11-06 15:07:25.047906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.942 [2024-11-06 15:07:25.047956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.942 [2024-11-06 15:07:25.059896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.942 [2024-11-06 15:07:25.059940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
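The target-side configuration this test exercises (zcopy.sh@22 through @30 in the trace above) is a short RPC sequence. The sketch below issues rpc.py directly against the default socket; the script itself goes through rpc_cmd so the calls reach the nvmf_tgt running in the namespace.
# Sketch of the target configuration traced above (zero-copy enabled on the TCP transport):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1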
00:08:55.942 [2024-11-06 15:07:25.071918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.942 [2024-11-06 15:07:25.071970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.942 [2024-11-06 15:07:25.083907] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.942 [2024-11-06 15:07:25.083951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.942 [2024-11-06 15:07:25.091482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.942 [2024-11-06 15:07:25.095899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.942 [2024-11-06 15:07:25.095941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.942 [2024-11-06 15:07:25.107932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.942 [2024-11-06 15:07:25.107971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.942 [2024-11-06 15:07:25.119939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.942 [2024-11-06 15:07:25.119996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.942 [2024-11-06 15:07:25.131936] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.942 [2024-11-06 15:07:25.131976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.942 [2024-11-06 15:07:25.143994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.942 [2024-11-06 15:07:25.144030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.942 [2024-11-06 15:07:25.155950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.942 [2024-11-06 15:07:25.155998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.942 [2024-11-06 15:07:25.167953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.942 [2024-11-06 15:07:25.167999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.942 [2024-11-06 15:07:25.179963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.942 [2024-11-06 15:07:25.180009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.942 [2024-11-06 15:07:25.191976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.942 [2024-11-06 15:07:25.192021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.942 [2024-11-06 15:07:25.203996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.942 [2024-11-06 15:07:25.204025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.942 [2024-11-06 15:07:25.216014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.942 [2024-11-06 15:07:25.216092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 Running I/O for 5 seconds... 
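The repeated "Requested NSID 1 already in use" / "Unable to add namespace" messages around the "Running I/O for 5 seconds..." marker appear to be expected noise rather than a failure: while the 5-second randrw bdevperf job runs, the script keeps re-issuing nvmf_subsystem_add_ns for NSID 1, and the attempts that find the namespace still attached simply fail. A hedged sketch of such a churn loop follows; the iteration count, the remove step, and the error tolerance are assumptions, not the literal zcopy.sh code.
# Hypothetical namespace churn while bdevperf I/O is in flight (illustrative only):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in $(seq 1 20); do
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 || true
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true   # may log "NSID 1 already in use"
done
wait "$perfpid"   # the bdevperf job (perfpid=62854 above) must still finish cleanly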
00:08:56.201 [2024-11-06 15:07:25.228023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-06 15:07:25.228080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-06 15:07:25.244182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-06 15:07:25.244233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-06 15:07:25.260761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.202 [2024-11-06 15:07:25.260791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.202 [2024-11-06 15:07:25.275222] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.202 [2024-11-06 15:07:25.275270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.202 [2024-11-06 15:07:25.290794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.202 [2024-11-06 15:07:25.290843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.202 [2024-11-06 15:07:25.307283] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.202 [2024-11-06 15:07:25.307332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.202 [2024-11-06 15:07:25.324463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.202 [2024-11-06 15:07:25.324515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.202 [2024-11-06 15:07:25.340780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.202 [2024-11-06 15:07:25.340814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.202 [2024-11-06 15:07:25.359292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.202 [2024-11-06 15:07:25.359341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.202 [2024-11-06 15:07:25.373340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.202 [2024-11-06 15:07:25.373390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.202 [2024-11-06 15:07:25.389091] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.202 [2024-11-06 15:07:25.389137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.202 [2024-11-06 15:07:25.405916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.202 [2024-11-06 15:07:25.405955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.202 [2024-11-06 15:07:25.421810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.202 [2024-11-06 15:07:25.421859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.202 [2024-11-06 15:07:25.440667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.202 [2024-11-06 15:07:25.440714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.202 [2024-11-06 15:07:25.455154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.202 
[2024-11-06 15:07:25.455188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.202 [2024-11-06 15:07:25.464726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.202 [2024-11-06 15:07:25.464772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.461 [2024-11-06 15:07:25.480641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.461 [2024-11-06 15:07:25.480685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.461 [2024-11-06 15:07:25.497970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.461 [2024-11-06 15:07:25.498019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.461 [2024-11-06 15:07:25.514771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.461 [2024-11-06 15:07:25.514824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.461 [2024-11-06 15:07:25.531347] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.461 [2024-11-06 15:07:25.531406] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.461 [2024-11-06 15:07:25.549078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.461 [2024-11-06 15:07:25.549167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.461 [2024-11-06 15:07:25.564865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.461 [2024-11-06 15:07:25.564926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.461 [2024-11-06 15:07:25.581548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.461 [2024-11-06 15:07:25.581598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.461 [2024-11-06 15:07:25.597526] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.461 [2024-11-06 15:07:25.597574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.461 [2024-11-06 15:07:25.613961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.461 [2024-11-06 15:07:25.614009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.461 [2024-11-06 15:07:25.631622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.461 [2024-11-06 15:07:25.631686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.461 [2024-11-06 15:07:25.646261] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.461 [2024-11-06 15:07:25.646312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.461 [2024-11-06 15:07:25.664678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.461 [2024-11-06 15:07:25.664772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.461 [2024-11-06 15:07:25.680267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.461 [2024-11-06 15:07:25.680315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.462 [2024-11-06 15:07:25.698310] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.462 [2024-11-06 15:07:25.698388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.462 [2024-11-06 15:07:25.714124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.462 [2024-11-06 15:07:25.714185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.462 [2024-11-06 15:07:25.730866] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.462 [2024-11-06 15:07:25.730917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-11-06 15:07:25.747336] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-11-06 15:07:25.747412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-11-06 15:07:25.763550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-11-06 15:07:25.763588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-11-06 15:07:25.779801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-11-06 15:07:25.779852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-11-06 15:07:25.796403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-11-06 15:07:25.796454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-11-06 15:07:25.812609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-11-06 15:07:25.812687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-11-06 15:07:25.830613] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-11-06 15:07:25.830691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-11-06 15:07:25.844421] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-11-06 15:07:25.844473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-11-06 15:07:25.861135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-11-06 15:07:25.861204] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-11-06 15:07:25.875391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-11-06 15:07:25.875445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-11-06 15:07:25.891811] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-11-06 15:07:25.891844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-11-06 15:07:25.908338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-11-06 15:07:25.908403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-11-06 15:07:25.925926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-11-06 15:07:25.925997] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-11-06 15:07:25.941614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-11-06 15:07:25.941688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-11-06 15:07:25.959310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-11-06 15:07:25.959360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-11-06 15:07:25.976719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-11-06 15:07:25.976783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-11-06 15:07:25.993738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.723 [2024-11-06 15:07:25.993769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.983 [2024-11-06 15:07:26.008985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.983 [2024-11-06 15:07:26.009034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.983 [2024-11-06 15:07:26.018391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.983 [2024-11-06 15:07:26.018442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.983 [2024-11-06 15:07:26.033553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.983 [2024-11-06 15:07:26.033603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.983 [2024-11-06 15:07:26.050134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.983 [2024-11-06 15:07:26.050185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.983 [2024-11-06 15:07:26.066879] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.983 [2024-11-06 15:07:26.066929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.983 [2024-11-06 15:07:26.083816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.983 [2024-11-06 15:07:26.083849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.983 [2024-11-06 15:07:26.100606] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.983 [2024-11-06 15:07:26.100681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.983 [2024-11-06 15:07:26.117012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.983 [2024-11-06 15:07:26.117061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.983 [2024-11-06 15:07:26.134376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.983 [2024-11-06 15:07:26.134432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.983 [2024-11-06 15:07:26.150139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.983 [2024-11-06 15:07:26.150191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.983 [2024-11-06 15:07:26.168313] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.983 [2024-11-06 15:07:26.168363] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.983 [2024-11-06 15:07:26.183795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.983 [2024-11-06 15:07:26.183837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.983 [2024-11-06 15:07:26.200389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.983 [2024-11-06 15:07:26.200439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.983 [2024-11-06 15:07:26.218151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.983 [2024-11-06 15:07:26.218211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.983 [2024-11-06 15:07:26.233614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.983 [2024-11-06 15:07:26.233688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.983 [2024-11-06 15:07:26.250841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.983 [2024-11-06 15:07:26.250892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.242 [2024-11-06 15:07:26.266541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.242 [2024-11-06 15:07:26.266590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.242 [2024-11-06 15:07:26.284874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.242 [2024-11-06 15:07:26.284922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.242 [2024-11-06 15:07:26.300193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.242 [2024-11-06 15:07:26.300241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.242 [2024-11-06 15:07:26.317752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.242 [2024-11-06 15:07:26.317802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.242 [2024-11-06 15:07:26.333985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.242 [2024-11-06 15:07:26.334035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.242 [2024-11-06 15:07:26.352369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.242 [2024-11-06 15:07:26.352417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.242 [2024-11-06 15:07:26.366141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.242 [2024-11-06 15:07:26.366190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.242 [2024-11-06 15:07:26.382414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.242 [2024-11-06 15:07:26.382462] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.242 [2024-11-06 15:07:26.398705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.242 [2024-11-06 15:07:26.398747] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.242 [2024-11-06 15:07:26.416257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.242 [2024-11-06 15:07:26.416305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.242 [2024-11-06 15:07:26.433496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.242 [2024-11-06 15:07:26.433544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.242 [2024-11-06 15:07:26.449980] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.242 [2024-11-06 15:07:26.450029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.242 [2024-11-06 15:07:26.467817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.242 [2024-11-06 15:07:26.467867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.242 [2024-11-06 15:07:26.483725] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.242 [2024-11-06 15:07:26.483769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.242 [2024-11-06 15:07:26.500264] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.242 [2024-11-06 15:07:26.500315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.501 [2024-11-06 15:07:26.517792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.501 [2024-11-06 15:07:26.517828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.501 [2024-11-06 15:07:26.532885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.501 [2024-11-06 15:07:26.532918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.501 [2024-11-06 15:07:26.548382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.501 [2024-11-06 15:07:26.548413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.501 [2024-11-06 15:07:26.560081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.502 [2024-11-06 15:07:26.560113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.502 [2024-11-06 15:07:26.575533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.502 [2024-11-06 15:07:26.575567] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.502 [2024-11-06 15:07:26.593008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.502 [2024-11-06 15:07:26.593072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.502 [2024-11-06 15:07:26.607793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.502 [2024-11-06 15:07:26.607842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.502 [2024-11-06 15:07:26.622961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.502 [2024-11-06 15:07:26.622996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.502 [2024-11-06 15:07:26.640548] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.502 [2024-11-06 15:07:26.640597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.502 [2024-11-06 15:07:26.655965] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.502 [2024-11-06 15:07:26.656015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.502 [2024-11-06 15:07:26.675152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.502 [2024-11-06 15:07:26.675201] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.502 [2024-11-06 15:07:26.689183] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.502 [2024-11-06 15:07:26.689231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.502 [2024-11-06 15:07:26.705887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.502 [2024-11-06 15:07:26.705937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.502 [2024-11-06 15:07:26.721788] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.502 [2024-11-06 15:07:26.721837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.502 [2024-11-06 15:07:26.739302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.502 [2024-11-06 15:07:26.739350] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.502 [2024-11-06 15:07:26.755829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.502 [2024-11-06 15:07:26.755860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.502 [2024-11-06 15:07:26.774491] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.502 [2024-11-06 15:07:26.774544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.761 [2024-11-06 15:07:26.789021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.761 [2024-11-06 15:07:26.789086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.761 [2024-11-06 15:07:26.804908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.761 [2024-11-06 15:07:26.804958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.761 [2024-11-06 15:07:26.822342] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.761 [2024-11-06 15:07:26.822391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.761 [2024-11-06 15:07:26.838130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.761 [2024-11-06 15:07:26.838178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.761 [2024-11-06 15:07:26.855833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.761 [2024-11-06 15:07:26.855866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.761 [2024-11-06 15:07:26.873932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.761 [2024-11-06 15:07:26.873966] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.761 [2024-11-06 15:07:26.888421] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.761 [2024-11-06 15:07:26.888489] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.761 [2024-11-06 15:07:26.904268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.761 [2024-11-06 15:07:26.904336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.761 [2024-11-06 15:07:26.920534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.761 [2024-11-06 15:07:26.920602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.761 [2024-11-06 15:07:26.937104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.762 [2024-11-06 15:07:26.937174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.762 [2024-11-06 15:07:26.954551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.762 [2024-11-06 15:07:26.954621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.762 [2024-11-06 15:07:26.969374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.762 [2024-11-06 15:07:26.969442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.762 [2024-11-06 15:07:26.985596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.762 [2024-11-06 15:07:26.985651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.762 [2024-11-06 15:07:27.001392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.762 [2024-11-06 15:07:27.001447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.762 [2024-11-06 15:07:27.019518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.762 [2024-11-06 15:07:27.019581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.762 [2024-11-06 15:07:27.034611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.762 [2024-11-06 15:07:27.034694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.021 [2024-11-06 15:07:27.052685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.021 [2024-11-06 15:07:27.053006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.021 [2024-11-06 15:07:27.068608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.021 [2024-11-06 15:07:27.068829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.021 [2024-11-06 15:07:27.086720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.021 [2024-11-06 15:07:27.086908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.021 [2024-11-06 15:07:27.102284] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.021 [2024-11-06 15:07:27.102466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.021 [2024-11-06 15:07:27.113776] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.021 [2024-11-06 15:07:27.113966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.021 [2024-11-06 15:07:27.130187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.021 [2024-11-06 15:07:27.130368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.021 [2024-11-06 15:07:27.146037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.021 [2024-11-06 15:07:27.146249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.021 [2024-11-06 15:07:27.163895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.021 [2024-11-06 15:07:27.164077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.021 [2024-11-06 15:07:27.179588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.021 [2024-11-06 15:07:27.179829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.021 [2024-11-06 15:07:27.196403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.021 [2024-11-06 15:07:27.196586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.021 [2024-11-06 15:07:27.214242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.021 [2024-11-06 15:07:27.214446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.021 [2024-11-06 15:07:27.229212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.021 [2024-11-06 15:07:27.229394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.021 [2024-11-06 15:07:27.240647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.021 [2024-11-06 15:07:27.240878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.021 [2024-11-06 15:07:27.257151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.021 [2024-11-06 15:07:27.257339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.021 [2024-11-06 15:07:27.272796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.021 [2024-11-06 15:07:27.272970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.021 [2024-11-06 15:07:27.290297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.021 [2024-11-06 15:07:27.290481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.280 [2024-11-06 15:07:27.304301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.280 [2024-11-06 15:07:27.304483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.280 [2024-11-06 15:07:27.319693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.280 [2024-11-06 15:07:27.319922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.280 [2024-11-06 15:07:27.337899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.280 [2024-11-06 15:07:27.338080] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.280 [2024-11-06 15:07:27.352891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.280 [2024-11-06 15:07:27.353090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.280 [2024-11-06 15:07:27.363736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.280 [2024-11-06 15:07:27.363932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.280 [2024-11-06 15:07:27.380220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.280 [2024-11-06 15:07:27.380400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.280 [2024-11-06 15:07:27.397012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.280 [2024-11-06 15:07:27.397226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.280 [2024-11-06 15:07:27.412873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.280 [2024-11-06 15:07:27.412908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.280 [2024-11-06 15:07:27.430428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.280 [2024-11-06 15:07:27.430462] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.280 [2024-11-06 15:07:27.446096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.280 [2024-11-06 15:07:27.446130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.280 [2024-11-06 15:07:27.463588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.280 [2024-11-06 15:07:27.463811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.280 [2024-11-06 15:07:27.479235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.280 [2024-11-06 15:07:27.479463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.280 [2024-11-06 15:07:27.496821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.280 [2024-11-06 15:07:27.496856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.280 [2024-11-06 15:07:27.511160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.280 [2024-11-06 15:07:27.511195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.280 [2024-11-06 15:07:27.527113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.280 [2024-11-06 15:07:27.527146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.280 [2024-11-06 15:07:27.544048] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.280 [2024-11-06 15:07:27.544082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.540 [2024-11-06 15:07:27.560865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.540 [2024-11-06 15:07:27.560900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.540 [2024-11-06 15:07:27.577722] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.540 [2024-11-06 15:07:27.577755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.540 [2024-11-06 15:07:27.592484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.540 [2024-11-06 15:07:27.592719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.540 [2024-11-06 15:07:27.609472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.540 [2024-11-06 15:07:27.609507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.540 [2024-11-06 15:07:27.625075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.540 [2024-11-06 15:07:27.625110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.540 [2024-11-06 15:07:27.642507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.540 [2024-11-06 15:07:27.642543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.540 [2024-11-06 15:07:27.658742] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.540 [2024-11-06 15:07:27.658775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.540 [2024-11-06 15:07:27.676929] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.540 [2024-11-06 15:07:27.676965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.540 [2024-11-06 15:07:27.692243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.540 [2024-11-06 15:07:27.692418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.540 [2024-11-06 15:07:27.708915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.540 [2024-11-06 15:07:27.708950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.540 [2024-11-06 15:07:27.727001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.540 [2024-11-06 15:07:27.727203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.540 [2024-11-06 15:07:27.742223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.540 [2024-11-06 15:07:27.742400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.540 [2024-11-06 15:07:27.759197] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.540 [2024-11-06 15:07:27.759230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.540 [2024-11-06 15:07:27.775711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.540 [2024-11-06 15:07:27.775764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.540 [2024-11-06 15:07:27.792975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.540 [2024-11-06 15:07:27.793012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.540 [2024-11-06 15:07:27.808958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.540 [2024-11-06 15:07:27.808997] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.800 [2024-11-06 15:07:27.825434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.800 [2024-11-06 15:07:27.825471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.800 [2024-11-06 15:07:27.844065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.800 [2024-11-06 15:07:27.844099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.800 [2024-11-06 15:07:27.857841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.800 [2024-11-06 15:07:27.857876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.800 [2024-11-06 15:07:27.873431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.800 [2024-11-06 15:07:27.873614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.800 [2024-11-06 15:07:27.890437] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.800 [2024-11-06 15:07:27.890598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.800 [2024-11-06 15:07:27.907430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.800 [2024-11-06 15:07:27.907601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.800 [2024-11-06 15:07:27.922859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.800 [2024-11-06 15:07:27.923043] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.800 [2024-11-06 15:07:27.942055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.800 [2024-11-06 15:07:27.942212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.800 [2024-11-06 15:07:27.956216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.800 [2024-11-06 15:07:27.956419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.800 [2024-11-06 15:07:27.972903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.800 [2024-11-06 15:07:27.973099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.800 [2024-11-06 15:07:27.988875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.800 [2024-11-06 15:07:27.989073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.800 [2024-11-06 15:07:27.998314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.800 [2024-11-06 15:07:27.998509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.800 [2024-11-06 15:07:28.013884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.800 [2024-11-06 15:07:28.014078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.800 [2024-11-06 15:07:28.029386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.800 [2024-11-06 15:07:28.029567] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.800 [2024-11-06 15:07:28.046179] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.800 [2024-11-06 15:07:28.046373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.800 [2024-11-06 15:07:28.062739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.800 [2024-11-06 15:07:28.062956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.060 [2024-11-06 15:07:28.079868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.060 [2024-11-06 15:07:28.080048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.060 [2024-11-06 15:07:28.096009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.060 [2024-11-06 15:07:28.096190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.060 [2024-11-06 15:07:28.112415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.060 [2024-11-06 15:07:28.112596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.060 [2024-11-06 15:07:28.129062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.060 [2024-11-06 15:07:28.129279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.060 [2024-11-06 15:07:28.145480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.060 [2024-11-06 15:07:28.145709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.060 [2024-11-06 15:07:28.161255] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.060 [2024-11-06 15:07:28.161290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.060 [2024-11-06 15:07:28.179000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.060 [2024-11-06 15:07:28.179035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.060 [2024-11-06 15:07:28.194195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.060 [2024-11-06 15:07:28.194229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.060 [2024-11-06 15:07:28.203370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.060 [2024-11-06 15:07:28.203447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.060 [2024-11-06 15:07:28.218448] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.060 [2024-11-06 15:07:28.218483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.060 [2024-11-06 15:07:28.233998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.060 [2024-11-06 15:07:28.234034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.060 [2024-11-06 15:07:28.258426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.060 [2024-11-06 15:07:28.258466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.060 [2024-11-06 15:07:28.267981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.060 [2024-11-06 15:07:28.268192] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.060 [2024-11-06 15:07:28.282782] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.060 [2024-11-06 15:07:28.282819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.060 [2024-11-06 15:07:28.299746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.060 [2024-11-06 15:07:28.299785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.060 [2024-11-06 15:07:28.317153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.060 [2024-11-06 15:07:28.317189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.060 [2024-11-06 15:07:28.334007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.060 [2024-11-06 15:07:28.334043] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.319 [2024-11-06 15:07:28.349990] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.319 [2024-11-06 15:07:28.350024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.319 [2024-11-06 15:07:28.368081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.319 [2024-11-06 15:07:28.368132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.319 [2024-11-06 15:07:28.382645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.319 [2024-11-06 15:07:28.382739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.319 [2024-11-06 15:07:28.399302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.319 [2024-11-06 15:07:28.399336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.319 [2024-11-06 15:07:28.415323] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.319 [2024-11-06 15:07:28.415357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.319 [2024-11-06 15:07:28.431324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.319 [2024-11-06 15:07:28.431358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.319 [2024-11-06 15:07:28.449243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.319 [2024-11-06 15:07:28.449431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.319 [2024-11-06 15:07:28.464443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.319 [2024-11-06 15:07:28.464625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.319 [2024-11-06 15:07:28.479324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.319 [2024-11-06 15:07:28.479537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.319 [2024-11-06 15:07:28.495871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.319 [2024-11-06 15:07:28.495905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.319 [2024-11-06 15:07:28.512575] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.319 [2024-11-06 15:07:28.512630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.319 [2024-11-06 15:07:28.530650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.319 [2024-11-06 15:07:28.530892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.319 [2024-11-06 15:07:28.545085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.319 [2024-11-06 15:07:28.545119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.319 [2024-11-06 15:07:28.561778] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.319 [2024-11-06 15:07:28.561813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.319 [2024-11-06 15:07:28.576374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.319 [2024-11-06 15:07:28.576408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.319 [2024-11-06 15:07:28.592114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.319 [2024-11-06 15:07:28.592165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.578 [2024-11-06 15:07:28.609862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.578 [2024-11-06 15:07:28.609903] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.578 [2024-11-06 15:07:28.625866] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.578 [2024-11-06 15:07:28.625901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.578 [2024-11-06 15:07:28.643471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.578 [2024-11-06 15:07:28.643509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.578 [2024-11-06 15:07:28.658249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.578 [2024-11-06 15:07:28.658283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.578 [2024-11-06 15:07:28.672654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.578 [2024-11-06 15:07:28.672900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.578 [2024-11-06 15:07:28.688507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.578 [2024-11-06 15:07:28.688761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.578 [2024-11-06 15:07:28.705245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.578 [2024-11-06 15:07:28.705281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.578 [2024-11-06 15:07:28.721502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.578 [2024-11-06 15:07:28.721540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.578 [2024-11-06 15:07:28.737428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.578 [2024-11-06 15:07:28.737463] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.578 [2024-11-06 15:07:28.755781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.579 [2024-11-06 15:07:28.755849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.579 [2024-11-06 15:07:28.770573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.579 [2024-11-06 15:07:28.770608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.579 [2024-11-06 15:07:28.788836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.579 [2024-11-06 15:07:28.788870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.579 [2024-11-06 15:07:28.804966] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.579 [2024-11-06 15:07:28.805003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.579 [2024-11-06 15:07:28.821733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.579 [2024-11-06 15:07:28.821796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.579 [2024-11-06 15:07:28.837475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.579 [2024-11-06 15:07:28.837508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.837 [2024-11-06 15:07:28.855952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.837 [2024-11-06 15:07:28.856031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.837 [2024-11-06 15:07:28.870916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.838 [2024-11-06 15:07:28.870964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.838 [2024-11-06 15:07:28.882122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.838 [2024-11-06 15:07:28.882155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.838 [2024-11-06 15:07:28.898692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.838 [2024-11-06 15:07:28.898725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.838 [2024-11-06 15:07:28.914521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.838 [2024-11-06 15:07:28.914555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.838 [2024-11-06 15:07:28.930856] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.838 [2024-11-06 15:07:28.930894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.838 [2024-11-06 15:07:28.949455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.838 [2024-11-06 15:07:28.949493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.838 [2024-11-06 15:07:28.963342] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.838 [2024-11-06 15:07:28.963376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.838 [2024-11-06 15:07:28.979506] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.838 [2024-11-06 15:07:28.979551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.838 [2024-11-06 15:07:28.996658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.838 [2024-11-06 15:07:28.996943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.838 [2024-11-06 15:07:29.011855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.838 [2024-11-06 15:07:29.011911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.838 [2024-11-06 15:07:29.028776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.838 [2024-11-06 15:07:29.028867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.838 [2024-11-06 15:07:29.045918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.838 [2024-11-06 15:07:29.045970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.838 [2024-11-06 15:07:29.062340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.838 [2024-11-06 15:07:29.062375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.838 [2024-11-06 15:07:29.078487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.838 [2024-11-06 15:07:29.078523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.838 [2024-11-06 15:07:29.094018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.838 [2024-11-06 15:07:29.094067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.838 [2024-11-06 15:07:29.105129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.838 [2024-11-06 15:07:29.105162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.097 [2024-11-06 15:07:29.121632] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.097 [2024-11-06 15:07:29.121727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.097 [2024-11-06 15:07:29.137544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.097 [2024-11-06 15:07:29.137579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.097 [2024-11-06 15:07:29.155195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.097 [2024-11-06 15:07:29.155228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.097 [2024-11-06 15:07:29.171222] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.097 [2024-11-06 15:07:29.171257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.097 [2024-11-06 15:07:29.187177] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.097 [2024-11-06 15:07:29.187231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.097 [2024-11-06 15:07:29.205578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.097 [2024-11-06 15:07:29.205624] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.097 [2024-11-06 15:07:29.220018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.097 [2024-11-06 15:07:29.220051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.097 [2024-11-06 15:07:29.235365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.097 [2024-11-06 15:07:29.235426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.097 [2024-11-06 15:07:29.252037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.097 [2024-11-06 15:07:29.252071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.097 [2024-11-06 15:07:29.269454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.097 [2024-11-06 15:07:29.269490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.097 [2024-11-06 15:07:29.284635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.097 [2024-11-06 15:07:29.284713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.097 [2024-11-06 15:07:29.302275] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.097 [2024-11-06 15:07:29.302309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.097 [2024-11-06 15:07:29.319905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.097 [2024-11-06 15:07:29.319938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.097 [2024-11-06 15:07:29.335546] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.097 [2024-11-06 15:07:29.335583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.097 [2024-11-06 15:07:29.351951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.097 [2024-11-06 15:07:29.352157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.097 [2024-11-06 15:07:29.370647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.097 [2024-11-06 15:07:29.370727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.356 [2024-11-06 15:07:29.384615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.356 [2024-11-06 15:07:29.384649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.356 [2024-11-06 15:07:29.401194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.356 [2024-11-06 15:07:29.401228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.356 [2024-11-06 15:07:29.418093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.356 [2024-11-06 15:07:29.418142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.357 [2024-11-06 15:07:29.434766] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.357 [2024-11-06 15:07:29.434798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.357 [2024-11-06 15:07:29.452707] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.357 [2024-11-06 15:07:29.452739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.357 [2024-11-06 15:07:29.468834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.357 [2024-11-06 15:07:29.468868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.357 [2024-11-06 15:07:29.486069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.357 [2024-11-06 15:07:29.486259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.357 [2024-11-06 15:07:29.500906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.357 [2024-11-06 15:07:29.501094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.357 [2024-11-06 15:07:29.518827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.357 [2024-11-06 15:07:29.519008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.357 [2024-11-06 15:07:29.533575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.357 [2024-11-06 15:07:29.533771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.357 [2024-11-06 15:07:29.549109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.357 [2024-11-06 15:07:29.549288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.357 [2024-11-06 15:07:29.566316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.357 [2024-11-06 15:07:29.566481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.357 [2024-11-06 15:07:29.582321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.357 [2024-11-06 15:07:29.582502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.357 [2024-11-06 15:07:29.600457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.357 [2024-11-06 15:07:29.600664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.357 [2024-11-06 15:07:29.615267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.357 [2024-11-06 15:07:29.615476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.616 [2024-11-06 15:07:29.632765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.616 [2024-11-06 15:07:29.632990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.616 [2024-11-06 15:07:29.646913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.616 [2024-11-06 15:07:29.647080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.616 [2024-11-06 15:07:29.663480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.616 [2024-11-06 15:07:29.663671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.616 [2024-11-06 15:07:29.679239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.616 [2024-11-06 15:07:29.679459] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.616 [2024-11-06 15:07:29.697536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.616 [2024-11-06 15:07:29.697750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.616 [2024-11-06 15:07:29.713915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.616 [2024-11-06 15:07:29.714099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.616 [2024-11-06 15:07:29.731163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.616 [2024-11-06 15:07:29.731356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.616 [2024-11-06 15:07:29.746415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.616 [2024-11-06 15:07:29.746623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.616 [2024-11-06 15:07:29.758075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.616 [2024-11-06 15:07:29.758291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.616 [2024-11-06 15:07:29.774833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.616 [2024-11-06 15:07:29.775010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.616 [2024-11-06 15:07:29.790121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.616 [2024-11-06 15:07:29.790293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.616 [2024-11-06 15:07:29.799927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.616 [2024-11-06 15:07:29.800128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.616 [2024-11-06 15:07:29.815756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.616 [2024-11-06 15:07:29.815942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.616 [2024-11-06 15:07:29.833010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.616 [2024-11-06 15:07:29.833218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.616 [2024-11-06 15:07:29.850558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.616 [2024-11-06 15:07:29.850748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.616 [2024-11-06 15:07:29.865500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.616 [2024-11-06 15:07:29.865701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.616 [2024-11-06 15:07:29.880717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.616 [2024-11-06 15:07:29.880936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.616 [2024-11-06 15:07:29.890557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.616 [2024-11-06 15:07:29.890593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.875 [2024-11-06 15:07:29.905525] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.875 [2024-11-06 15:07:29.905561] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.875 [2024-11-06 15:07:29.924208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.875 [2024-11-06 15:07:29.924396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.875 [2024-11-06 15:07:29.938759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.875 [2024-11-06 15:07:29.938794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.875 [2024-11-06 15:07:29.950095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.875 [2024-11-06 15:07:29.950129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.875 [2024-11-06 15:07:29.965907] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.875 [2024-11-06 15:07:29.965941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.875 [2024-11-06 15:07:29.983023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.875 [2024-11-06 15:07:29.983071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.875 [2024-11-06 15:07:29.999253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.875 [2024-11-06 15:07:29.999288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.875 [2024-11-06 15:07:30.015809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.875 [2024-11-06 15:07:30.015845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.875 [2024-11-06 15:07:30.032114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.875 [2024-11-06 15:07:30.032151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.875 [2024-11-06 15:07:30.050149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.875 [2024-11-06 15:07:30.050183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.875 [2024-11-06 15:07:30.065187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.875 [2024-11-06 15:07:30.065382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.875 [2024-11-06 15:07:30.074643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.875 [2024-11-06 15:07:30.074738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.875 [2024-11-06 15:07:30.090884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.875 [2024-11-06 15:07:30.090919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.875 [2024-11-06 15:07:30.108400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.875 [2024-11-06 15:07:30.108593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.875 [2024-11-06 15:07:30.124374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.875 [2024-11-06 15:07:30.124411] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:00.875 [2024-11-06 15:07:30.143472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:00.875 [2024-11-06 15:07:30.143668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-06 15:07:30.158100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.134 [2024-11-06 15:07:30.158285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-06 15:07:30.169991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.134 [2024-11-06 15:07:30.170042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-06 15:07:30.186148] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.134 [2024-11-06 15:07:30.186183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.135 [2024-11-06 15:07:30.202558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.135 [2024-11-06 15:07:30.202595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.135 [2024-11-06 15:07:30.219161] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.135 [2024-11-06 15:07:30.219195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.135
00:09:01.135 Latency(us)
00:09:01.135 [2024-11-06T15:07:30.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:01.135 [2024-11-06T15:07:30.410Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:01.135 Nvme1n1 : 5.01 12795.00 99.96 0.00 0.00 9993.95 3991.74 20494.89
00:09:01.135 [2024-11-06T15:07:30.410Z] ===================================================================================================================
00:09:01.135 [2024-11-06T15:07:30.410Z] Total : 12795.00 99.96 0.00 0.00 9993.95 3991.74 20494.89
00:09:01.135 [2024-11-06 15:07:30.230948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.135 [2024-11-06 15:07:30.230982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.135 [2024-11-06 15:07:30.242926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.135 [2024-11-06 15:07:30.242959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.135 [2024-11-06 15:07:30.254961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.135 [2024-11-06 15:07:30.255010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.135 [2024-11-06 15:07:30.266959] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.135 [2024-11-06 15:07:30.267006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.135 [2024-11-06 15:07:30.278966] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.135 [2024-11-06 15:07:30.279030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.135 [2024-11-06 15:07:30.290970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.135 [2024-11-06 15:07:30.291019]
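The MiB/s column in the summary above follows directly from the reported IOPS and the 8192-byte I/O size; a quick one-liner (a sanity check added here, not part of the test run) reproduces it:

    # 12795.00 IOPS x 8192 B per I/O, expressed in MiB/s (matches the 99.96 in the table)
    awk 'BEGIN { printf "%.2f MiB/s\n", 12795.00 * 8192 / (1024 * 1024) }'
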
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.135 [2024-11-06 15:07:30.302971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.135 [2024-11-06 15:07:30.303018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.135 [2024-11-06 15:07:30.314953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.135 [2024-11-06 15:07:30.314987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.135 [2024-11-06 15:07:30.326945] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.135 [2024-11-06 15:07:30.326976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.135 [2024-11-06 15:07:30.338939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.135 [2024-11-06 15:07:30.338966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.135 [2024-11-06 15:07:30.350976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.135 [2024-11-06 15:07:30.351020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.135 [2024-11-06 15:07:30.362954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.135 [2024-11-06 15:07:30.362983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.135 [2024-11-06 15:07:30.374978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.135 [2024-11-06 15:07:30.375032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.135 [2024-11-06 15:07:30.387009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.135 [2024-11-06 15:07:30.387081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.135 [2024-11-06 15:07:30.398982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.135 [2024-11-06 15:07:30.399020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.394 [2024-11-06 15:07:30.410959] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.394 [2024-11-06 15:07:30.410986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.394 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (62854) - No such process 00:09:01.394 15:07:30 -- target/zcopy.sh@49 -- # wait 62854 00:09:01.394 15:07:30 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.394 15:07:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.394 15:07:30 -- common/autotest_common.sh@10 -- # set +x 00:09:01.394 15:07:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.394 15:07:30 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:01.394 15:07:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.394 15:07:30 -- common/autotest_common.sh@10 -- # set +x 00:09:01.394 delay0 00:09:01.394 15:07:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.394 15:07:30 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:01.394 15:07:30 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:01.394 15:07:30 -- common/autotest_common.sh@10 -- # set +x 00:09:01.394 15:07:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.394 15:07:30 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:01.394 [2024-11-06 15:07:30.607176] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:07.969 Initializing NVMe Controllers 00:09:07.969 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:07.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:07.969 Initialization complete. Launching workers. 00:09:07.969 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 760 00:09:07.969 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1047, failed to submit 33 00:09:07.969 success 951, unsuccess 96, failed 0 00:09:07.969 15:07:36 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:07.969 15:07:36 -- target/zcopy.sh@60 -- # nvmftestfini 00:09:07.969 15:07:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:07.969 15:07:36 -- nvmf/common.sh@116 -- # sync 00:09:07.969 15:07:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:07.969 15:07:36 -- nvmf/common.sh@119 -- # set +e 00:09:07.969 15:07:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:07.969 15:07:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:07.969 rmmod nvme_tcp 00:09:07.969 rmmod nvme_fabrics 00:09:07.969 rmmod nvme_keyring 00:09:07.969 15:07:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:07.969 15:07:36 -- nvmf/common.sh@123 -- # set -e 00:09:07.969 15:07:36 -- nvmf/common.sh@124 -- # return 0 00:09:07.969 15:07:36 -- nvmf/common.sh@477 -- # '[' -n 62698 ']' 00:09:07.969 15:07:36 -- nvmf/common.sh@478 -- # killprocess 62698 00:09:07.969 15:07:36 -- common/autotest_common.sh@936 -- # '[' -z 62698 ']' 00:09:07.969 15:07:36 -- common/autotest_common.sh@940 -- # kill -0 62698 00:09:07.969 15:07:36 -- common/autotest_common.sh@941 -- # uname 00:09:07.969 15:07:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:07.969 15:07:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62698 00:09:07.969 killing process with pid 62698 00:09:07.969 15:07:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:07.970 15:07:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:07.970 15:07:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62698' 00:09:07.970 15:07:36 -- common/autotest_common.sh@955 -- # kill 62698 00:09:07.970 15:07:36 -- common/autotest_common.sh@960 -- # wait 62698 00:09:07.970 15:07:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:07.970 15:07:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:07.970 15:07:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:07.970 15:07:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:07.970 15:07:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:07.970 15:07:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.970 15:07:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.970 15:07:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.970 15:07:37 -- nvmf/common.sh@278 -- # 
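What the trace above boils down to, as a minimal standalone sketch (assuming the same repo layout and target address this job uses; rpc_cmd in the test framework wraps scripts/rpc.py against the default local RPC socket):

    #!/usr/bin/env bash
    # Re-creation of the delay0/abort step recorded above; assumes an SPDK nvmf
    # target is already running and listening on 10.0.0.2:4420, as in this job.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk      # path taken from the log
    RPC="$SPDK_DIR/scripts/rpc.py"             # rpc_cmd wraps this script

    # Swap the namespace used by the earlier loop for one backed by a delay bdev,
    # so the abort tool has slow outstanding I/O to cancel.
    "$RPC" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    "$RPC" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

    # Same abort invocation as above: 5 s of randrw at queue depth 64 on core 0.
    "$SPDK_DIR/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
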
ip -4 addr flush nvmf_init_if 00:09:07.970 00:09:07.970 real 0m24.556s 00:09:07.970 user 0m40.376s 00:09:07.970 sys 0m6.466s 00:09:07.970 15:07:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:07.970 15:07:37 -- common/autotest_common.sh@10 -- # set +x 00:09:07.970 ************************************ 00:09:07.970 END TEST nvmf_zcopy 00:09:07.970 ************************************ 00:09:07.970 15:07:37 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:07.970 15:07:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:07.970 15:07:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:07.970 15:07:37 -- common/autotest_common.sh@10 -- # set +x 00:09:07.970 ************************************ 00:09:07.970 START TEST nvmf_nmic 00:09:07.970 ************************************ 00:09:07.970 15:07:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:08.229 * Looking for test storage... 00:09:08.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:08.229 15:07:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:08.229 15:07:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:08.229 15:07:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:08.229 15:07:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:08.229 15:07:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:08.229 15:07:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:08.229 15:07:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:08.229 15:07:37 -- scripts/common.sh@335 -- # IFS=.-: 00:09:08.229 15:07:37 -- scripts/common.sh@335 -- # read -ra ver1 00:09:08.229 15:07:37 -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.229 15:07:37 -- scripts/common.sh@336 -- # read -ra ver2 00:09:08.229 15:07:37 -- scripts/common.sh@337 -- # local 'op=<' 00:09:08.229 15:07:37 -- scripts/common.sh@339 -- # ver1_l=2 00:09:08.229 15:07:37 -- scripts/common.sh@340 -- # ver2_l=1 00:09:08.229 15:07:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:08.229 15:07:37 -- scripts/common.sh@343 -- # case "$op" in 00:09:08.229 15:07:37 -- scripts/common.sh@344 -- # : 1 00:09:08.229 15:07:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:08.229 15:07:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.229 15:07:37 -- scripts/common.sh@364 -- # decimal 1 00:09:08.229 15:07:37 -- scripts/common.sh@352 -- # local d=1 00:09:08.229 15:07:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.229 15:07:37 -- scripts/common.sh@354 -- # echo 1 00:09:08.229 15:07:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:08.229 15:07:37 -- scripts/common.sh@365 -- # decimal 2 00:09:08.229 15:07:37 -- scripts/common.sh@352 -- # local d=2 00:09:08.229 15:07:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.229 15:07:37 -- scripts/common.sh@354 -- # echo 2 00:09:08.229 15:07:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:08.229 15:07:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:08.229 15:07:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:08.229 15:07:37 -- scripts/common.sh@367 -- # return 0 00:09:08.229 15:07:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.229 15:07:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:08.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.229 --rc genhtml_branch_coverage=1 00:09:08.229 --rc genhtml_function_coverage=1 00:09:08.229 --rc genhtml_legend=1 00:09:08.229 --rc geninfo_all_blocks=1 00:09:08.229 --rc geninfo_unexecuted_blocks=1 00:09:08.229 00:09:08.229 ' 00:09:08.229 15:07:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:08.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.229 --rc genhtml_branch_coverage=1 00:09:08.229 --rc genhtml_function_coverage=1 00:09:08.229 --rc genhtml_legend=1 00:09:08.229 --rc geninfo_all_blocks=1 00:09:08.229 --rc geninfo_unexecuted_blocks=1 00:09:08.229 00:09:08.229 ' 00:09:08.229 15:07:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:08.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.229 --rc genhtml_branch_coverage=1 00:09:08.229 --rc genhtml_function_coverage=1 00:09:08.229 --rc genhtml_legend=1 00:09:08.229 --rc geninfo_all_blocks=1 00:09:08.229 --rc geninfo_unexecuted_blocks=1 00:09:08.229 00:09:08.229 ' 00:09:08.230 15:07:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:08.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.230 --rc genhtml_branch_coverage=1 00:09:08.230 --rc genhtml_function_coverage=1 00:09:08.230 --rc genhtml_legend=1 00:09:08.230 --rc geninfo_all_blocks=1 00:09:08.230 --rc geninfo_unexecuted_blocks=1 00:09:08.230 00:09:08.230 ' 00:09:08.230 15:07:37 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:08.230 15:07:37 -- nvmf/common.sh@7 -- # uname -s 00:09:08.230 15:07:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.230 15:07:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.230 15:07:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.230 15:07:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.230 15:07:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.230 15:07:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.230 15:07:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.230 15:07:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.230 15:07:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.230 15:07:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.230 15:07:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:09:08.230 
15:07:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:09:08.230 15:07:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.230 15:07:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.230 15:07:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:08.230 15:07:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:08.230 15:07:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.230 15:07:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.230 15:07:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.230 15:07:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.230 15:07:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.230 15:07:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.230 15:07:37 -- paths/export.sh@5 -- # export PATH 00:09:08.230 15:07:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.230 15:07:37 -- nvmf/common.sh@46 -- # : 0 00:09:08.230 15:07:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:08.230 15:07:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:08.230 15:07:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:08.230 15:07:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.230 15:07:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.230 15:07:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
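The NVME_HOSTNQN / NVME_HOSTID pair generated above with nvme gen-hostnqn is what the later nvme connect calls in this log pass as --hostnqn and --hostid. A minimal standalone sketch of that flow, assuming nvme-cli is installed and a target is already listening; the UUID is whatever gen-hostnqn prints, not the one from this particular run:

    # Generate a host NQN and derive the host ID from its uuid suffix,
    # then connect the way nmic.sh does further below in this log.
    HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    HOSTID=${HOSTNQN##*uuid:}            # keep only the <uuid> part for --hostid
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420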
00:09:08.230 15:07:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:08.230 15:07:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:08.230 15:07:37 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:08.230 15:07:37 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:08.230 15:07:37 -- target/nmic.sh@14 -- # nvmftestinit 00:09:08.230 15:07:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:08.230 15:07:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.230 15:07:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:08.230 15:07:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:08.230 15:07:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:08.230 15:07:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.230 15:07:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.230 15:07:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.230 15:07:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:08.230 15:07:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:08.230 15:07:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:08.230 15:07:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:08.230 15:07:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:08.230 15:07:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:08.230 15:07:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.230 15:07:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.230 15:07:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:08.230 15:07:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:08.230 15:07:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:08.230 15:07:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:08.230 15:07:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:08.230 15:07:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.230 15:07:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:08.230 15:07:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:08.230 15:07:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:08.230 15:07:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:08.230 15:07:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:08.230 15:07:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:08.230 Cannot find device "nvmf_tgt_br" 00:09:08.230 15:07:37 -- nvmf/common.sh@154 -- # true 00:09:08.230 15:07:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:08.230 Cannot find device "nvmf_tgt_br2" 00:09:08.230 15:07:37 -- nvmf/common.sh@155 -- # true 00:09:08.230 15:07:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:08.230 15:07:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:08.230 Cannot find device "nvmf_tgt_br" 00:09:08.230 15:07:37 -- nvmf/common.sh@157 -- # true 00:09:08.230 15:07:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:08.230 Cannot find device "nvmf_tgt_br2" 00:09:08.230 15:07:37 -- nvmf/common.sh@158 -- # true 00:09:08.230 15:07:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:08.490 15:07:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:08.490 15:07:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:08.490 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:09:08.490 15:07:37 -- nvmf/common.sh@161 -- # true 00:09:08.490 15:07:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:08.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:08.490 15:07:37 -- nvmf/common.sh@162 -- # true 00:09:08.490 15:07:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:08.490 15:07:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:08.490 15:07:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:08.490 15:07:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:08.490 15:07:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:08.490 15:07:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:08.490 15:07:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:08.490 15:07:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:08.490 15:07:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:08.490 15:07:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:08.490 15:07:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:08.490 15:07:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:08.490 15:07:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:08.490 15:07:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:08.490 15:07:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:08.490 15:07:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:08.490 15:07:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:08.490 15:07:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:08.490 15:07:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:08.490 15:07:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:08.490 15:07:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:08.490 15:07:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:08.490 15:07:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:08.490 15:07:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:08.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:09:08.490 00:09:08.490 --- 10.0.0.2 ping statistics --- 00:09:08.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.490 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:09:08.490 15:07:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:08.490 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:08.490 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:09:08.490 00:09:08.490 --- 10.0.0.3 ping statistics --- 00:09:08.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.490 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:08.490 15:07:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:08.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:09:08.490 00:09:08.490 --- 10.0.0.1 ping statistics --- 00:09:08.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.490 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:09:08.490 15:07:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.490 15:07:37 -- nvmf/common.sh@421 -- # return 0 00:09:08.490 15:07:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:08.490 15:07:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.490 15:07:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:08.490 15:07:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:08.490 15:07:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.490 15:07:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:08.490 15:07:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:08.490 15:07:37 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:08.490 15:07:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:08.490 15:07:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:08.490 15:07:37 -- common/autotest_common.sh@10 -- # set +x 00:09:08.756 15:07:37 -- nvmf/common.sh@469 -- # nvmfpid=63179 00:09:08.756 15:07:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:08.756 15:07:37 -- nvmf/common.sh@470 -- # waitforlisten 63179 00:09:08.756 15:07:37 -- common/autotest_common.sh@829 -- # '[' -z 63179 ']' 00:09:08.756 15:07:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.756 15:07:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:08.756 15:07:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.756 15:07:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:08.756 15:07:37 -- common/autotest_common.sh@10 -- # set +x 00:09:08.756 [2024-11-06 15:07:37.823869] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:08.756 [2024-11-06 15:07:37.824146] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.756 [2024-11-06 15:07:37.963482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.029 [2024-11-06 15:07:38.032258] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:09.029 [2024-11-06 15:07:38.032623] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.029 [2024-11-06 15:07:38.032650] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.029 [2024-11-06 15:07:38.032681] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
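For reference, the veth/bridge topology nvmf_veth_init assembled above (host-side nvmf_init_if at 10.0.0.1, target interfaces inside the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, all joined by the nvmf_br bridge) condenses to roughly the following, assuming iproute2 and root; the second target interface nvmf_tgt_if2 follows the same pattern and is omitted:

    # Condensed sketch of the test network built by nvmf_veth_init above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                   # initiator reaches the target address, as verified above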
00:09:09.029 [2024-11-06 15:07:38.033185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.030 [2024-11-06 15:07:38.033291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.030 [2024-11-06 15:07:38.035698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.030 [2024-11-06 15:07:38.035719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.966 15:07:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:09.966 15:07:38 -- common/autotest_common.sh@862 -- # return 0 00:09:09.966 15:07:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:09.966 15:07:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:09.966 15:07:38 -- common/autotest_common.sh@10 -- # set +x 00:09:09.966 15:07:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.966 15:07:38 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:09.966 15:07:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.966 15:07:38 -- common/autotest_common.sh@10 -- # set +x 00:09:09.966 [2024-11-06 15:07:38.915740] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.966 15:07:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.966 15:07:38 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:09.966 15:07:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.966 15:07:38 -- common/autotest_common.sh@10 -- # set +x 00:09:09.966 Malloc0 00:09:09.966 15:07:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.966 15:07:38 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:09.966 15:07:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.966 15:07:38 -- common/autotest_common.sh@10 -- # set +x 00:09:09.966 15:07:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.966 15:07:38 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:09.966 15:07:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.966 15:07:38 -- common/autotest_common.sh@10 -- # set +x 00:09:09.966 15:07:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.966 15:07:38 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.966 15:07:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.966 15:07:38 -- common/autotest_common.sh@10 -- # set +x 00:09:09.966 [2024-11-06 15:07:38.974633] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.966 test case1: single bdev can't be used in multiple subsystems 00:09:09.966 15:07:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.966 15:07:38 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:09.966 15:07:38 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:09.966 15:07:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.966 15:07:38 -- common/autotest_common.sh@10 -- # set +x 00:09:09.966 15:07:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.966 15:07:38 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:09.966 15:07:38 -- common/autotest_common.sh@561 -- # xtrace_disable 
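test case1 deliberately reuses Malloc0: the bdev already backs a namespace in cnode1, so adding it to cnode2 is expected to fail, and the JSON-RPC error that follows below confirms it. Stripped of the test-framework wrappers, the sequence is roughly the following sketch, assuming a running nvmf_tgt and its default RPC socket:

    # Same bdev offered to two subsystems: the second add_ns must fail,
    # because the first subsystem already claimed Malloc0 exclusively.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # succeeds
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: bdev already claimed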
00:09:09.966 15:07:38 -- common/autotest_common.sh@10 -- # set +x 00:09:09.966 15:07:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.966 15:07:38 -- target/nmic.sh@28 -- # nmic_status=0 00:09:09.966 15:07:38 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:09.966 15:07:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.966 15:07:38 -- common/autotest_common.sh@10 -- # set +x 00:09:09.966 [2024-11-06 15:07:38.998481] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:09.966 [2024-11-06 15:07:38.998513] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:09.966 [2024-11-06 15:07:38.998523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.966 request: 00:09:09.966 { 00:09:09.966 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:09.966 "namespace": { 00:09:09.966 "bdev_name": "Malloc0" 00:09:09.966 }, 00:09:09.966 "method": "nvmf_subsystem_add_ns", 00:09:09.966 "req_id": 1 00:09:09.966 } 00:09:09.966 Got JSON-RPC error response 00:09:09.966 response: 00:09:09.966 { 00:09:09.966 "code": -32602, 00:09:09.966 "message": "Invalid parameters" 00:09:09.966 } 00:09:09.966 Adding namespace failed - expected result. 00:09:09.966 test case2: host connect to nvmf target in multiple paths 00:09:09.966 15:07:39 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:09.966 15:07:39 -- target/nmic.sh@29 -- # nmic_status=1 00:09:09.966 15:07:39 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:09.966 15:07:39 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:09.966 15:07:39 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:09.966 15:07:39 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:09.966 15:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.966 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:09:09.966 [2024-11-06 15:07:39.010582] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:09.966 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.966 15:07:39 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 --hostid=819f6113-9743-44c3-be27-f14abf178c18 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:09.966 15:07:39 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 --hostid=819f6113-9743-44c3-be27-f14abf178c18 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:10.225 15:07:39 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:10.225 15:07:39 -- common/autotest_common.sh@1187 -- # local i=0 00:09:10.225 15:07:39 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:09:10.225 15:07:39 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:09:10.225 15:07:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:09:12.127 15:07:41 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:09:12.127 15:07:41 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:09:12.127 15:07:41 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:09:12.127 15:07:41 -- common/autotest_common.sh@1196 -- # 
nvme_devices=1 00:09:12.127 15:07:41 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:09:12.127 15:07:41 -- common/autotest_common.sh@1197 -- # return 0 00:09:12.127 15:07:41 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:12.127 [global] 00:09:12.127 thread=1 00:09:12.127 invalidate=1 00:09:12.127 rw=write 00:09:12.127 time_based=1 00:09:12.127 runtime=1 00:09:12.127 ioengine=libaio 00:09:12.127 direct=1 00:09:12.127 bs=4096 00:09:12.127 iodepth=1 00:09:12.127 norandommap=0 00:09:12.127 numjobs=1 00:09:12.127 00:09:12.127 verify_dump=1 00:09:12.127 verify_backlog=512 00:09:12.127 verify_state_save=0 00:09:12.127 do_verify=1 00:09:12.127 verify=crc32c-intel 00:09:12.127 [job0] 00:09:12.127 filename=/dev/nvme0n1 00:09:12.127 Could not set queue depth (nvme0n1) 00:09:12.386 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.386 fio-3.35 00:09:12.386 Starting 1 thread 00:09:13.762 00:09:13.762 job0: (groupid=0, jobs=1): err= 0: pid=63271: Wed Nov 6 15:07:42 2024 00:09:13.762 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:13.762 slat (nsec): min=11470, max=47312, avg=13713.01, stdev=3323.81 00:09:13.762 clat (usec): min=128, max=321, avg=176.18, stdev=18.42 00:09:13.762 lat (usec): min=139, max=334, avg=189.89, stdev=18.77 00:09:13.762 clat percentiles (usec): 00:09:13.762 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 161], 00:09:13.762 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 182], 00:09:13.762 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 208], 00:09:13.762 | 99.00th=[ 223], 99.50th=[ 227], 99.90th=[ 253], 99.95th=[ 297], 00:09:13.762 | 99.99th=[ 322] 00:09:13.762 write: IOPS=3127, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec); 0 zone resets 00:09:13.762 slat (usec): min=17, max=100, avg=21.51, stdev= 5.58 00:09:13.762 clat (usec): min=78, max=547, avg=108.18, stdev=18.46 00:09:13.762 lat (usec): min=96, max=567, avg=129.69, stdev=20.31 00:09:13.762 clat percentiles (usec): 00:09:13.762 | 1.00th=[ 84], 5.00th=[ 90], 10.00th=[ 93], 20.00th=[ 97], 00:09:13.762 | 30.00th=[ 100], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 109], 00:09:13.762 | 70.00th=[ 113], 80.00th=[ 118], 90.00th=[ 128], 95.00th=[ 137], 00:09:13.762 | 99.00th=[ 155], 99.50th=[ 161], 99.90th=[ 221], 99.95th=[ 461], 00:09:13.762 | 99.99th=[ 545] 00:09:13.762 bw ( KiB/s): min=12263, max=12263, per=98.01%, avg=12263.00, stdev= 0.00, samples=1 00:09:13.762 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:09:13.762 lat (usec) : 100=15.61%, 250=84.28%, 500=0.10%, 750=0.02% 00:09:13.762 cpu : usr=2.60%, sys=8.50%, ctx=6203, majf=0, minf=5 00:09:13.762 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.762 issued rwts: total=3072,3131,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.762 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.762 00:09:13.762 Run status group 0 (all jobs): 00:09:13.762 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:13.762 WRITE: bw=12.2MiB/s (12.8MB/s), 12.2MiB/s-12.2MiB/s (12.8MB/s-12.8MB/s), io=12.2MiB (12.8MB), run=1001-1001msec 00:09:13.762 00:09:13.762 Disk stats (read/write): 
00:09:13.762 nvme0n1: ios=2613/3072, merge=0/0, ticks=489/379, in_queue=868, util=91.18% 00:09:13.762 15:07:42 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:13.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:13.762 15:07:42 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:13.762 15:07:42 -- common/autotest_common.sh@1208 -- # local i=0 00:09:13.762 15:07:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.762 15:07:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:09:13.762 15:07:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:09:13.762 15:07:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.762 15:07:42 -- common/autotest_common.sh@1220 -- # return 0 00:09:13.762 15:07:42 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:13.762 15:07:42 -- target/nmic.sh@53 -- # nvmftestfini 00:09:13.762 15:07:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:13.762 15:07:42 -- nvmf/common.sh@116 -- # sync 00:09:13.762 15:07:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:13.762 15:07:42 -- nvmf/common.sh@119 -- # set +e 00:09:13.762 15:07:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:13.762 15:07:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:13.762 rmmod nvme_tcp 00:09:13.762 rmmod nvme_fabrics 00:09:13.762 rmmod nvme_keyring 00:09:13.762 15:07:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:13.762 15:07:42 -- nvmf/common.sh@123 -- # set -e 00:09:13.762 15:07:42 -- nvmf/common.sh@124 -- # return 0 00:09:13.762 15:07:42 -- nvmf/common.sh@477 -- # '[' -n 63179 ']' 00:09:13.762 15:07:42 -- nvmf/common.sh@478 -- # killprocess 63179 00:09:13.762 15:07:42 -- common/autotest_common.sh@936 -- # '[' -z 63179 ']' 00:09:13.762 15:07:42 -- common/autotest_common.sh@940 -- # kill -0 63179 00:09:13.762 15:07:42 -- common/autotest_common.sh@941 -- # uname 00:09:13.762 15:07:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:13.762 15:07:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63179 00:09:13.762 killing process with pid 63179 00:09:13.762 15:07:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:13.762 15:07:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:13.762 15:07:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63179' 00:09:13.762 15:07:42 -- common/autotest_common.sh@955 -- # kill 63179 00:09:13.762 15:07:42 -- common/autotest_common.sh@960 -- # wait 63179 00:09:14.021 15:07:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:14.021 15:07:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:14.021 15:07:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:14.021 15:07:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:14.021 15:07:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:14.021 15:07:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.021 15:07:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:14.021 15:07:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.021 15:07:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:14.021 00:09:14.021 real 0m5.899s 00:09:14.021 user 0m18.862s 00:09:14.021 sys 0m2.235s 00:09:14.021 ************************************ 00:09:14.021 END TEST nvmf_nmic 00:09:14.021 ************************************ 00:09:14.021 
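The teardown that just ran (nvmftestfini) undoes the per-test state: drop the initiator session, unload the host-side transport modules, stop the target, and clear the test network. In outline, as a sketch only; the namespace removal step is an assumption about what _remove_spdk_ns amounts to:

    # Outline of the per-test teardown seen above.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # drops both initiator paths (4420 and 4421)
    modprobe -v -r nvme-tcp                           # rmmod nvme_tcp
    modprobe -v -r nvme-fabrics                       # rmmod nvme_fabrics / nvme_keyring
    kill "$nvmfpid"; wait "$nvmfpid"                  # killprocess of the nvmf_tgt pid
    ip netns delete nvmf_tgt_ns_spdk                  # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if                     # clear the initiator-side address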
15:07:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:14.021 15:07:43 -- common/autotest_common.sh@10 -- # set +x 00:09:14.021 15:07:43 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:14.021 15:07:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:14.021 15:07:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:14.022 15:07:43 -- common/autotest_common.sh@10 -- # set +x 00:09:14.022 ************************************ 00:09:14.022 START TEST nvmf_fio_target 00:09:14.022 ************************************ 00:09:14.022 15:07:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:14.022 * Looking for test storage... 00:09:14.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:14.022 15:07:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:14.022 15:07:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:14.022 15:07:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:14.281 15:07:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:14.281 15:07:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:14.281 15:07:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:14.281 15:07:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:14.281 15:07:43 -- scripts/common.sh@335 -- # IFS=.-: 00:09:14.281 15:07:43 -- scripts/common.sh@335 -- # read -ra ver1 00:09:14.281 15:07:43 -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.281 15:07:43 -- scripts/common.sh@336 -- # read -ra ver2 00:09:14.281 15:07:43 -- scripts/common.sh@337 -- # local 'op=<' 00:09:14.281 15:07:43 -- scripts/common.sh@339 -- # ver1_l=2 00:09:14.281 15:07:43 -- scripts/common.sh@340 -- # ver2_l=1 00:09:14.281 15:07:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:14.281 15:07:43 -- scripts/common.sh@343 -- # case "$op" in 00:09:14.281 15:07:43 -- scripts/common.sh@344 -- # : 1 00:09:14.281 15:07:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:14.281 15:07:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.281 15:07:43 -- scripts/common.sh@364 -- # decimal 1 00:09:14.281 15:07:43 -- scripts/common.sh@352 -- # local d=1 00:09:14.281 15:07:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.281 15:07:43 -- scripts/common.sh@354 -- # echo 1 00:09:14.281 15:07:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:14.281 15:07:43 -- scripts/common.sh@365 -- # decimal 2 00:09:14.281 15:07:43 -- scripts/common.sh@352 -- # local d=2 00:09:14.281 15:07:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.281 15:07:43 -- scripts/common.sh@354 -- # echo 2 00:09:14.281 15:07:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:14.281 15:07:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:14.281 15:07:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:14.281 15:07:43 -- scripts/common.sh@367 -- # return 0 00:09:14.281 15:07:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.281 15:07:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:14.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.281 --rc genhtml_branch_coverage=1 00:09:14.281 --rc genhtml_function_coverage=1 00:09:14.281 --rc genhtml_legend=1 00:09:14.281 --rc geninfo_all_blocks=1 00:09:14.281 --rc geninfo_unexecuted_blocks=1 00:09:14.281 00:09:14.281 ' 00:09:14.281 15:07:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:14.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.281 --rc genhtml_branch_coverage=1 00:09:14.281 --rc genhtml_function_coverage=1 00:09:14.281 --rc genhtml_legend=1 00:09:14.281 --rc geninfo_all_blocks=1 00:09:14.281 --rc geninfo_unexecuted_blocks=1 00:09:14.281 00:09:14.281 ' 00:09:14.281 15:07:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:14.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.281 --rc genhtml_branch_coverage=1 00:09:14.281 --rc genhtml_function_coverage=1 00:09:14.281 --rc genhtml_legend=1 00:09:14.281 --rc geninfo_all_blocks=1 00:09:14.281 --rc geninfo_unexecuted_blocks=1 00:09:14.281 00:09:14.281 ' 00:09:14.281 15:07:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:14.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.281 --rc genhtml_branch_coverage=1 00:09:14.281 --rc genhtml_function_coverage=1 00:09:14.281 --rc genhtml_legend=1 00:09:14.281 --rc geninfo_all_blocks=1 00:09:14.281 --rc geninfo_unexecuted_blocks=1 00:09:14.281 00:09:14.281 ' 00:09:14.281 15:07:43 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:14.281 15:07:43 -- nvmf/common.sh@7 -- # uname -s 00:09:14.281 15:07:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.281 15:07:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.281 15:07:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.281 15:07:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.281 15:07:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.281 15:07:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.281 15:07:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.281 15:07:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.281 15:07:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.281 15:07:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.281 15:07:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:09:14.281 
15:07:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:09:14.281 15:07:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.281 15:07:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.281 15:07:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:14.281 15:07:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:14.281 15:07:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.281 15:07:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.281 15:07:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.281 15:07:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.281 15:07:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.281 15:07:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.281 15:07:43 -- paths/export.sh@5 -- # export PATH 00:09:14.281 15:07:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.281 15:07:43 -- nvmf/common.sh@46 -- # : 0 00:09:14.281 15:07:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:14.281 15:07:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:14.281 15:07:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:14.281 15:07:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.281 15:07:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.281 15:07:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
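Once fio.sh connects further below, it relies on the same waitforserial helper as nmic.sh: poll lsblk until the expected number of block devices carrying the subsystem serial show up. A rough sketch of the helper's logic, not its exact code:

    # Poll for namespaces with the SPDKISFASTANDAWESOME serial, as waitforserial does.
    serial=SPDKISFASTANDAWESOME
    expected=4                                        # fio.sh waits for four namespaces
    for _ in $(seq 1 15); do
        found=$(lsblk -l -o NAME,SERIAL | grep -cw "$serial" || true)
        (( found == expected )) && break
        sleep 2
    done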
00:09:14.281 15:07:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:14.281 15:07:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:14.281 15:07:43 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:14.281 15:07:43 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:14.281 15:07:43 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:14.281 15:07:43 -- target/fio.sh@16 -- # nvmftestinit 00:09:14.281 15:07:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:14.281 15:07:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.281 15:07:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:14.281 15:07:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:14.281 15:07:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:14.281 15:07:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.281 15:07:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:14.281 15:07:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.281 15:07:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:14.281 15:07:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:14.281 15:07:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:14.281 15:07:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:14.281 15:07:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:14.281 15:07:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:14.281 15:07:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.281 15:07:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.281 15:07:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:14.281 15:07:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:14.281 15:07:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:14.281 15:07:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:14.281 15:07:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:14.281 15:07:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.281 15:07:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:14.282 15:07:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:14.282 15:07:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:14.282 15:07:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:14.282 15:07:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:14.282 15:07:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:14.282 Cannot find device "nvmf_tgt_br" 00:09:14.282 15:07:43 -- nvmf/common.sh@154 -- # true 00:09:14.282 15:07:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:14.282 Cannot find device "nvmf_tgt_br2" 00:09:14.282 15:07:43 -- nvmf/common.sh@155 -- # true 00:09:14.282 15:07:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:14.282 15:07:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:14.282 Cannot find device "nvmf_tgt_br" 00:09:14.282 15:07:43 -- nvmf/common.sh@157 -- # true 00:09:14.282 15:07:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:14.282 Cannot find device "nvmf_tgt_br2" 00:09:14.282 15:07:43 -- nvmf/common.sh@158 -- # true 00:09:14.282 15:07:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:14.282 15:07:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:14.282 15:07:43 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:14.282 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:14.282 15:07:43 -- nvmf/common.sh@161 -- # true 00:09:14.282 15:07:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:14.282 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:14.282 15:07:43 -- nvmf/common.sh@162 -- # true 00:09:14.282 15:07:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:14.282 15:07:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:14.282 15:07:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:14.282 15:07:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:14.282 15:07:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:14.282 15:07:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:14.282 15:07:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:14.282 15:07:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:14.282 15:07:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:14.282 15:07:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:14.282 15:07:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:14.540 15:07:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:14.540 15:07:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:14.541 15:07:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:14.541 15:07:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:14.541 15:07:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:14.541 15:07:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:14.541 15:07:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:14.541 15:07:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:14.541 15:07:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:14.541 15:07:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:14.541 15:07:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:14.541 15:07:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:14.541 15:07:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:14.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:09:14.541 00:09:14.541 --- 10.0.0.2 ping statistics --- 00:09:14.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.541 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:14.541 15:07:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:14.541 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:14.541 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:09:14.541 00:09:14.541 --- 10.0.0.3 ping statistics --- 00:09:14.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.541 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:09:14.541 15:07:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:14.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:09:14.541 00:09:14.541 --- 10.0.0.1 ping statistics --- 00:09:14.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.541 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:09:14.541 15:07:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.541 15:07:43 -- nvmf/common.sh@421 -- # return 0 00:09:14.541 15:07:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:14.541 15:07:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.541 15:07:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:14.541 15:07:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:14.541 15:07:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.541 15:07:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:14.541 15:07:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:14.541 15:07:43 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:14.541 15:07:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:14.541 15:07:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:14.541 15:07:43 -- common/autotest_common.sh@10 -- # set +x 00:09:14.541 15:07:43 -- nvmf/common.sh@469 -- # nvmfpid=63456 00:09:14.541 15:07:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:14.541 15:07:43 -- nvmf/common.sh@470 -- # waitforlisten 63456 00:09:14.541 15:07:43 -- common/autotest_common.sh@829 -- # '[' -z 63456 ']' 00:09:14.541 15:07:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.541 15:07:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.541 15:07:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.541 15:07:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.541 15:07:43 -- common/autotest_common.sh@10 -- # set +x 00:09:14.541 [2024-11-06 15:07:43.736356] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:14.541 [2024-11-06 15:07:43.736469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.799 [2024-11-06 15:07:43.873729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.799 [2024-11-06 15:07:43.924489] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:14.799 [2024-11-06 15:07:43.924901] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.799 [2024-11-06 15:07:43.925029] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
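The RPC calls that follow assemble the bdev layout fio.sh exercises: seven malloc bdevs, a raid0 over two of them, a concat over three more, all exported as namespaces of cnode1 alongside the plain malloc namespaces. Collapsed into one sketch, using the same commands that appear below minus the per-call timestamps, run against the target's RPC socket:

    # Bdev/RAID layout exported by fio.sh, as built by the rpc.py calls below.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512                          # repeated, yielding Malloc0..Malloc6
    scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0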
00:09:14.799 [2024-11-06 15:07:43.925148] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.799 [2024-11-06 15:07:43.925563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.799 [2024-11-06 15:07:43.925714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.799 [2024-11-06 15:07:43.925841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.799 [2024-11-06 15:07:43.925846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.735 15:07:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.735 15:07:44 -- common/autotest_common.sh@862 -- # return 0 00:09:15.735 15:07:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:15.735 15:07:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:15.735 15:07:44 -- common/autotest_common.sh@10 -- # set +x 00:09:15.735 15:07:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.735 15:07:44 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:15.735 [2024-11-06 15:07:45.002804] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.993 15:07:45 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:16.251 15:07:45 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:16.251 15:07:45 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:16.509 15:07:45 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:16.509 15:07:45 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:16.767 15:07:45 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:16.767 15:07:45 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:17.026 15:07:46 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:17.026 15:07:46 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:17.284 15:07:46 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:17.543 15:07:46 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:17.543 15:07:46 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:17.801 15:07:46 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:17.801 15:07:46 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.060 15:07:47 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:18.060 15:07:47 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:18.318 15:07:47 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:18.576 15:07:47 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:18.577 15:07:47 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:18.835 15:07:47 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:18.835 15:07:47 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:19.093 15:07:48 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.093 [2024-11-06 15:07:48.362792] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.351 15:07:48 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:19.351 15:07:48 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:19.610 15:07:48 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 --hostid=819f6113-9743-44c3-be27-f14abf178c18 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:19.869 15:07:49 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:19.869 15:07:49 -- common/autotest_common.sh@1187 -- # local i=0 00:09:19.869 15:07:49 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:09:19.869 15:07:49 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:09:19.869 15:07:49 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:09:19.869 15:07:49 -- common/autotest_common.sh@1194 -- # sleep 2 00:09:21.772 15:07:51 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:09:21.772 15:07:51 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:09:21.772 15:07:51 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:09:21.772 15:07:51 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:09:21.772 15:07:51 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:09:21.772 15:07:51 -- common/autotest_common.sh@1197 -- # return 0 00:09:21.772 15:07:51 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:22.031 [global] 00:09:22.031 thread=1 00:09:22.031 invalidate=1 00:09:22.031 rw=write 00:09:22.031 time_based=1 00:09:22.031 runtime=1 00:09:22.031 ioengine=libaio 00:09:22.031 direct=1 00:09:22.031 bs=4096 00:09:22.031 iodepth=1 00:09:22.031 norandommap=0 00:09:22.031 numjobs=1 00:09:22.031 00:09:22.031 verify_dump=1 00:09:22.031 verify_backlog=512 00:09:22.031 verify_state_save=0 00:09:22.031 do_verify=1 00:09:22.031 verify=crc32c-intel 00:09:22.031 [job0] 00:09:22.031 filename=/dev/nvme0n1 00:09:22.031 [job1] 00:09:22.031 filename=/dev/nvme0n2 00:09:22.031 [job2] 00:09:22.031 filename=/dev/nvme0n3 00:09:22.031 [job3] 00:09:22.031 filename=/dev/nvme0n4 00:09:22.031 Could not set queue depth (nvme0n1) 00:09:22.031 Could not set queue depth (nvme0n2) 00:09:22.031 Could not set queue depth (nvme0n3) 00:09:22.031 Could not set queue depth (nvme0n4) 00:09:22.031 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.031 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.031 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.031 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.031 fio-3.35 00:09:22.031 Starting 4 threads 00:09:23.435 00:09:23.435 job0: (groupid=0, jobs=1): err= 0: pid=63642: Wed Nov 6 15:07:52 2024 00:09:23.435 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 
00:09:23.435 slat (nsec): min=6443, max=77382, avg=13955.88, stdev=7156.67 00:09:23.435 clat (usec): min=226, max=540, avg=294.52, stdev=31.12 00:09:23.435 lat (usec): min=240, max=550, avg=308.48, stdev=32.41 00:09:23.435 clat percentiles (usec): 00:09:23.435 | 1.00th=[ 237], 5.00th=[ 249], 10.00th=[ 258], 20.00th=[ 269], 00:09:23.435 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:09:23.435 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 334], 95.00th=[ 343], 00:09:23.435 | 99.00th=[ 375], 99.50th=[ 392], 99.90th=[ 461], 99.95th=[ 537], 00:09:23.435 | 99.99th=[ 537] 00:09:23.435 write: IOPS=1754, BW=7017KiB/s (7185kB/s)(7024KiB/1001msec); 0 zone resets 00:09:23.435 slat (usec): min=5, max=148, avg=37.11, stdev=28.35 00:09:23.435 clat (usec): min=33, max=489, avg=259.24, stdev=58.67 00:09:23.435 lat (usec): min=160, max=529, avg=296.34, stdev=63.39 00:09:23.435 clat percentiles (usec): 00:09:23.435 | 1.00th=[ 125], 5.00th=[ 188], 10.00th=[ 200], 20.00th=[ 212], 00:09:23.435 | 30.00th=[ 223], 40.00th=[ 235], 50.00th=[ 255], 60.00th=[ 273], 00:09:23.435 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 326], 95.00th=[ 367], 00:09:23.435 | 99.00th=[ 445], 99.50th=[ 457], 99.90th=[ 469], 99.95th=[ 490], 00:09:23.435 | 99.99th=[ 490] 00:09:23.435 bw ( KiB/s): min= 8192, max= 8192, per=23.68%, avg=8192.00, stdev= 0.00, samples=1 00:09:23.435 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:23.436 lat (usec) : 50=0.03%, 250=28.01%, 500=71.93%, 750=0.03% 00:09:23.436 cpu : usr=1.40%, sys=5.20%, ctx=4562, majf=0, minf=9 00:09:23.436 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.436 issued rwts: total=1536,1756,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.436 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.436 job1: (groupid=0, jobs=1): err= 0: pid=63643: Wed Nov 6 15:07:52 2024 00:09:23.436 read: IOPS=1765, BW=7061KiB/s (7230kB/s)(7068KiB/1001msec) 00:09:23.436 slat (nsec): min=6917, max=83996, avg=21588.54, stdev=13974.45 00:09:23.436 clat (usec): min=138, max=2782, avg=264.24, stdev=124.14 00:09:23.436 lat (usec): min=156, max=2801, avg=285.83, stdev=128.74 00:09:23.436 clat percentiles (usec): 00:09:23.436 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 165], 00:09:23.436 | 30.00th=[ 174], 40.00th=[ 253], 50.00th=[ 285], 60.00th=[ 302], 00:09:23.436 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 351], 95.00th=[ 429], 00:09:23.436 | 99.00th=[ 474], 99.50th=[ 490], 99.90th=[ 2409], 99.95th=[ 2769], 00:09:23.436 | 99.99th=[ 2769] 00:09:23.436 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:23.436 slat (usec): min=5, max=116, avg=23.91, stdev= 7.46 00:09:23.436 clat (usec): min=94, max=7313, avg=213.94, stdev=184.20 00:09:23.436 lat (usec): min=120, max=7337, avg=237.84, stdev=183.21 00:09:23.436 clat percentiles (usec): 00:09:23.436 | 1.00th=[ 101], 5.00th=[ 109], 10.00th=[ 113], 20.00th=[ 119], 00:09:23.436 | 30.00th=[ 124], 40.00th=[ 130], 50.00th=[ 157], 60.00th=[ 281], 00:09:23.436 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 326], 00:09:23.436 | 99.00th=[ 351], 99.50th=[ 367], 99.90th=[ 1057], 99.95th=[ 1401], 00:09:23.436 | 99.99th=[ 7308] 00:09:23.436 bw ( KiB/s): min=10472, max=10472, per=30.27%, avg=10472.00, stdev= 0.00, samples=1 00:09:23.436 iops : min= 2618, max= 2618, 
avg=2618.00, stdev= 0.00, samples=1 00:09:23.436 lat (usec) : 100=0.45%, 250=45.85%, 500=53.47%, 750=0.08% 00:09:23.436 lat (msec) : 2=0.08%, 4=0.05%, 10=0.03% 00:09:23.436 cpu : usr=2.10%, sys=5.80%, ctx=4244, majf=0, minf=15 00:09:23.436 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.436 issued rwts: total=1767,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.436 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.436 job2: (groupid=0, jobs=1): err= 0: pid=63645: Wed Nov 6 15:07:52 2024 00:09:23.436 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:23.436 slat (nsec): min=6501, max=62667, avg=15314.38, stdev=6950.05 00:09:23.436 clat (usec): min=224, max=622, avg=293.37, stdev=33.51 00:09:23.436 lat (usec): min=240, max=635, avg=308.69, stdev=33.82 00:09:23.436 clat percentiles (usec): 00:09:23.436 | 1.00th=[ 233], 5.00th=[ 245], 10.00th=[ 253], 20.00th=[ 265], 00:09:23.436 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:09:23.436 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 334], 95.00th=[ 347], 00:09:23.436 | 99.00th=[ 388], 99.50th=[ 412], 99.90th=[ 449], 99.95th=[ 619], 00:09:23.436 | 99.99th=[ 619] 00:09:23.436 write: IOPS=1779, BW=7117KiB/s (7288kB/s)(7124KiB/1001msec); 0 zone resets 00:09:23.436 slat (usec): min=5, max=230, avg=37.76, stdev=28.05 00:09:23.436 clat (usec): min=119, max=465, avg=254.23, stdev=54.99 00:09:23.436 lat (usec): min=180, max=496, avg=291.99, stdev=58.12 00:09:23.436 clat percentiles (usec): 00:09:23.436 | 1.00th=[ 137], 5.00th=[ 184], 10.00th=[ 196], 20.00th=[ 208], 00:09:23.436 | 30.00th=[ 217], 40.00th=[ 231], 50.00th=[ 251], 60.00th=[ 269], 00:09:23.436 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 322], 95.00th=[ 343], 00:09:23.436 | 99.00th=[ 424], 99.50th=[ 441], 99.90th=[ 461], 99.95th=[ 465], 00:09:23.436 | 99.99th=[ 465] 00:09:23.436 bw ( KiB/s): min= 8192, max= 8192, per=23.68%, avg=8192.00, stdev= 0.00, samples=1 00:09:23.436 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:23.436 lat (usec) : 250=30.36%, 500=69.61%, 750=0.03% 00:09:23.436 cpu : usr=1.50%, sys=5.60%, ctx=4529, majf=0, minf=7 00:09:23.436 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.436 issued rwts: total=1536,1781,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.436 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.436 job3: (groupid=0, jobs=1): err= 0: pid=63650: Wed Nov 6 15:07:52 2024 00:09:23.436 read: IOPS=2577, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec) 00:09:23.436 slat (nsec): min=12462, max=59501, avg=15906.07, stdev=3575.74 00:09:23.436 clat (usec): min=140, max=277, avg=179.06, stdev=16.61 00:09:23.436 lat (usec): min=152, max=293, avg=194.96, stdev=17.17 00:09:23.436 clat percentiles (usec): 00:09:23.436 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 165], 00:09:23.436 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:09:23.436 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 208], 00:09:23.436 | 99.00th=[ 229], 99.50th=[ 237], 99.90th=[ 249], 99.95th=[ 249], 00:09:23.436 | 99.99th=[ 277] 00:09:23.436 write: IOPS=3068, BW=12.0MiB/s 
(12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:23.436 slat (nsec): min=15433, max=90223, avg=23470.28, stdev=5476.00 00:09:23.436 clat (usec): min=95, max=475, avg=134.91, stdev=16.69 00:09:23.436 lat (usec): min=114, max=495, avg=158.38, stdev=17.55 00:09:23.436 clat percentiles (usec): 00:09:23.436 | 1.00th=[ 105], 5.00th=[ 115], 10.00th=[ 118], 20.00th=[ 123], 00:09:23.436 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:09:23.436 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 161], 00:09:23.436 | 99.00th=[ 180], 99.50th=[ 190], 99.90th=[ 208], 99.95th=[ 367], 00:09:23.436 | 99.99th=[ 478] 00:09:23.436 bw ( KiB/s): min=12288, max=12288, per=35.52%, avg=12288.00, stdev= 0.00, samples=1 00:09:23.436 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:23.436 lat (usec) : 100=0.16%, 250=99.79%, 500=0.05% 00:09:23.436 cpu : usr=2.80%, sys=8.50%, ctx=5652, majf=0, minf=5 00:09:23.436 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.436 issued rwts: total=2580,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.436 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.436 00:09:23.436 Run status group 0 (all jobs): 00:09:23.436 READ: bw=29.0MiB/s (30.4MB/s), 6138KiB/s-10.1MiB/s (6285kB/s-10.6MB/s), io=29.0MiB (30.4MB), run=1001-1001msec 00:09:23.436 WRITE: bw=33.8MiB/s (35.4MB/s), 7017KiB/s-12.0MiB/s (7185kB/s-12.6MB/s), io=33.8MiB (35.5MB), run=1001-1001msec 00:09:23.436 00:09:23.436 Disk stats (read/write): 00:09:23.436 nvme0n1: ios=1381/1536, merge=0/0, ticks=401/390, in_queue=791, util=87.88% 00:09:23.436 nvme0n2: ios=1569/1856, merge=0/0, ticks=390/391, in_queue=781, util=87.11% 00:09:23.436 nvme0n3: ios=1349/1536, merge=0/0, ticks=384/388, in_queue=772, util=89.11% 00:09:23.436 nvme0n4: ios=2263/2560, merge=0/0, ticks=408/363, in_queue=771, util=89.76% 00:09:23.436 15:07:52 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:23.436 [global] 00:09:23.436 thread=1 00:09:23.436 invalidate=1 00:09:23.436 rw=randwrite 00:09:23.436 time_based=1 00:09:23.436 runtime=1 00:09:23.436 ioengine=libaio 00:09:23.436 direct=1 00:09:23.436 bs=4096 00:09:23.436 iodepth=1 00:09:23.436 norandommap=0 00:09:23.436 numjobs=1 00:09:23.436 00:09:23.436 verify_dump=1 00:09:23.436 verify_backlog=512 00:09:23.436 verify_state_save=0 00:09:23.436 do_verify=1 00:09:23.436 verify=crc32c-intel 00:09:23.436 [job0] 00:09:23.436 filename=/dev/nvme0n1 00:09:23.436 [job1] 00:09:23.436 filename=/dev/nvme0n2 00:09:23.436 [job2] 00:09:23.436 filename=/dev/nvme0n3 00:09:23.436 [job3] 00:09:23.436 filename=/dev/nvme0n4 00:09:23.436 Could not set queue depth (nvme0n1) 00:09:23.436 Could not set queue depth (nvme0n2) 00:09:23.436 Could not set queue depth (nvme0n3) 00:09:23.436 Could not set queue depth (nvme0n4) 00:09:23.436 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:23.436 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:23.436 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:23.436 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
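The [global] and [jobN] sections above are the job file that fio-wrapper generates for the randwrite verify pass against the four connected namespaces. Run directly, an equivalent single-device invocation would look roughly like the following sketch, which uses only the parameters shown in that job file (the command itself does not appear in the log):

    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=randwrite --bs=4096 --iodepth=1 --numjobs=1 --thread=1 --invalidate=1 \
        --ioengine=libaio --direct=1 --time_based=1 --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1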
00:09:23.436 fio-3.35 00:09:23.436 Starting 4 threads 00:09:24.814 00:09:24.814 job0: (groupid=0, jobs=1): err= 0: pid=63704: Wed Nov 6 15:07:53 2024 00:09:24.814 read: IOPS=3016, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1001msec) 00:09:24.814 slat (nsec): min=12505, max=31813, avg=14067.13, stdev=1802.12 00:09:24.814 clat (usec): min=133, max=673, avg=165.91, stdev=15.01 00:09:24.814 lat (usec): min=147, max=687, avg=179.98, stdev=15.11 00:09:24.814 clat percentiles (usec): 00:09:24.814 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 157], 00:09:24.814 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:09:24.814 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 180], 95.00th=[ 186], 00:09:24.814 | 99.00th=[ 196], 99.50th=[ 206], 99.90th=[ 237], 99.95th=[ 293], 00:09:24.814 | 99.99th=[ 676] 00:09:24.814 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:24.814 slat (nsec): min=18765, max=48509, avg=20838.69, stdev=2306.75 00:09:24.814 clat (usec): min=97, max=2637, avg=124.35, stdev=54.84 00:09:24.814 lat (usec): min=117, max=2659, avg=145.19, stdev=54.90 00:09:24.814 clat percentiles (usec): 00:09:24.814 | 1.00th=[ 102], 5.00th=[ 108], 10.00th=[ 110], 20.00th=[ 115], 00:09:24.814 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 126], 00:09:24.814 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 137], 95.00th=[ 143], 00:09:24.814 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 245], 99.95th=[ 1696], 00:09:24.814 | 99.99th=[ 2638] 00:09:24.814 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:24.814 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:24.814 lat (usec) : 100=0.25%, 250=99.66%, 500=0.05%, 750=0.02% 00:09:24.814 lat (msec) : 2=0.02%, 4=0.02% 00:09:24.814 cpu : usr=1.50%, sys=9.20%, ctx=6092, majf=0, minf=13 00:09:24.814 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.814 issued rwts: total=3020,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.814 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.814 job1: (groupid=0, jobs=1): err= 0: pid=63705: Wed Nov 6 15:07:53 2024 00:09:24.814 read: IOPS=1765, BW=7061KiB/s (7230kB/s)(7068KiB/1001msec) 00:09:24.814 slat (nsec): min=13038, max=47926, avg=16207.79, stdev=3645.91 00:09:24.814 clat (usec): min=179, max=550, avg=274.09, stdev=42.91 00:09:24.814 lat (usec): min=193, max=583, avg=290.30, stdev=44.91 00:09:24.814 clat percentiles (usec): 00:09:24.814 | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 253], 00:09:24.814 | 30.00th=[ 260], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:09:24.814 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 375], 00:09:24.814 | 99.00th=[ 490], 99.50th=[ 515], 99.90th=[ 545], 99.95th=[ 553], 00:09:24.814 | 99.99th=[ 553] 00:09:24.814 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:24.814 slat (usec): min=18, max=102, avg=24.54, stdev= 6.23 00:09:24.814 clat (usec): min=103, max=2064, avg=209.55, stdev=94.31 00:09:24.814 lat (usec): min=128, max=2093, avg=234.10, stdev=96.98 00:09:24.814 clat percentiles (usec): 00:09:24.814 | 1.00th=[ 112], 5.00th=[ 121], 10.00th=[ 131], 20.00th=[ 182], 00:09:24.814 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 208], 00:09:24.814 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 
243], 95.00th=[ 334], 00:09:24.814 | 99.00th=[ 408], 99.50th=[ 553], 99.90th=[ 1975], 99.95th=[ 1975], 00:09:24.814 | 99.99th=[ 2057] 00:09:24.814 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:09:24.814 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:24.814 lat (usec) : 250=55.75%, 500=43.51%, 750=0.55%, 1000=0.05% 00:09:24.814 lat (msec) : 2=0.10%, 4=0.03% 00:09:24.814 cpu : usr=1.40%, sys=6.60%, ctx=3815, majf=0, minf=13 00:09:24.814 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.814 issued rwts: total=1767,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.814 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.814 job2: (groupid=0, jobs=1): err= 0: pid=63706: Wed Nov 6 15:07:53 2024 00:09:24.814 read: IOPS=1809, BW=7237KiB/s (7410kB/s)(7244KiB/1001msec) 00:09:24.814 slat (nsec): min=11989, max=51746, avg=15032.73, stdev=3285.16 00:09:24.814 clat (usec): min=183, max=547, avg=283.70, stdev=60.71 00:09:24.814 lat (usec): min=196, max=565, avg=298.73, stdev=62.20 00:09:24.814 clat percentiles (usec): 00:09:24.814 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 255], 00:09:24.814 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 269], 00:09:24.814 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 310], 95.00th=[ 469], 00:09:24.814 | 99.00th=[ 510], 99.50th=[ 515], 99.90th=[ 545], 99.95th=[ 545], 00:09:24.814 | 99.99th=[ 545] 00:09:24.814 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:24.814 slat (nsec): min=15341, max=90859, avg=22151.07, stdev=4200.51 00:09:24.814 clat (usec): min=104, max=431, avg=198.35, stdev=31.99 00:09:24.814 lat (usec): min=124, max=454, avg=220.50, stdev=32.04 00:09:24.814 clat percentiles (usec): 00:09:24.814 | 1.00th=[ 118], 5.00th=[ 131], 10.00th=[ 145], 20.00th=[ 182], 00:09:24.814 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 208], 00:09:24.814 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 241], 00:09:24.814 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 273], 99.95th=[ 400], 00:09:24.814 | 99.99th=[ 433] 00:09:24.814 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:09:24.814 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:24.814 lat (usec) : 250=57.94%, 500=41.10%, 750=0.96% 00:09:24.814 cpu : usr=1.90%, sys=5.60%, ctx=3860, majf=0, minf=11 00:09:24.814 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.814 issued rwts: total=1811,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.814 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.814 job3: (groupid=0, jobs=1): err= 0: pid=63707: Wed Nov 6 15:07:53 2024 00:09:24.814 read: IOPS=2617, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec) 00:09:24.814 slat (nsec): min=11990, max=33303, avg=14959.97, stdev=2369.93 00:09:24.814 clat (usec): min=143, max=248, avg=177.93, stdev=13.95 00:09:24.814 lat (usec): min=157, max=264, avg=192.89, stdev=14.40 00:09:24.814 clat percentiles (usec): 00:09:24.814 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:09:24.814 | 30.00th=[ 172], 40.00th=[ 
174], 50.00th=[ 178], 60.00th=[ 182], 00:09:24.814 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 202], 00:09:24.814 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 229], 99.95th=[ 247], 00:09:24.814 | 99.99th=[ 249] 00:09:24.814 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:24.814 slat (usec): min=14, max=123, avg=22.56, stdev= 6.57 00:09:24.814 clat (usec): min=55, max=501, avg=134.96, stdev=14.92 00:09:24.814 lat (usec): min=120, max=521, avg=157.52, stdev=15.72 00:09:24.814 clat percentiles (usec): 00:09:24.814 | 1.00th=[ 109], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 125], 00:09:24.814 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 139], 00:09:24.814 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:09:24.814 | 99.00th=[ 172], 99.50th=[ 180], 99.90th=[ 245], 99.95th=[ 289], 00:09:24.814 | 99.99th=[ 502] 00:09:24.814 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:24.814 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:24.814 lat (usec) : 100=0.09%, 250=99.86%, 500=0.04%, 750=0.02% 00:09:24.814 cpu : usr=1.90%, sys=9.00%, ctx=5714, majf=0, minf=11 00:09:24.814 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.814 issued rwts: total=2620,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.815 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.815 00:09:24.815 Run status group 0 (all jobs): 00:09:24.815 READ: bw=36.0MiB/s (37.7MB/s), 7061KiB/s-11.8MiB/s (7230kB/s-12.4MB/s), io=36.0MiB (37.8MB), run=1001-1001msec 00:09:24.815 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:09:24.815 00:09:24.815 Disk stats (read/write): 00:09:24.815 nvme0n1: ios=2609/2670, merge=0/0, ticks=447/350, in_queue=797, util=87.36% 00:09:24.815 nvme0n2: ios=1541/1692, merge=0/0, ticks=438/356, in_queue=794, util=87.84% 00:09:24.815 nvme0n3: ios=1536/1815, merge=0/0, ticks=434/360, in_queue=794, util=89.21% 00:09:24.815 nvme0n4: ios=2304/2560, merge=0/0, ticks=416/364, in_queue=780, util=89.68% 00:09:24.815 15:07:53 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:24.815 [global] 00:09:24.815 thread=1 00:09:24.815 invalidate=1 00:09:24.815 rw=write 00:09:24.815 time_based=1 00:09:24.815 runtime=1 00:09:24.815 ioengine=libaio 00:09:24.815 direct=1 00:09:24.815 bs=4096 00:09:24.815 iodepth=128 00:09:24.815 norandommap=0 00:09:24.815 numjobs=1 00:09:24.815 00:09:24.815 verify_dump=1 00:09:24.815 verify_backlog=512 00:09:24.815 verify_state_save=0 00:09:24.815 do_verify=1 00:09:24.815 verify=crc32c-intel 00:09:24.815 [job0] 00:09:24.815 filename=/dev/nvme0n1 00:09:24.815 [job1] 00:09:24.815 filename=/dev/nvme0n2 00:09:24.815 [job2] 00:09:24.815 filename=/dev/nvme0n3 00:09:24.815 [job3] 00:09:24.815 filename=/dev/nvme0n4 00:09:24.815 Could not set queue depth (nvme0n1) 00:09:24.815 Could not set queue depth (nvme0n2) 00:09:24.815 Could not set queue depth (nvme0n3) 00:09:24.815 Could not set queue depth (nvme0n4) 00:09:24.815 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:24.815 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:09:24.815 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:24.815 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:24.815 fio-3.35 00:09:24.815 Starting 4 threads 00:09:26.193 00:09:26.193 job0: (groupid=0, jobs=1): err= 0: pid=63767: Wed Nov 6 15:07:55 2024 00:09:26.193 read: IOPS=3008, BW=11.8MiB/s (12.3MB/s)(11.8MiB/1004msec) 00:09:26.193 slat (usec): min=11, max=7727, avg=161.74, stdev=815.41 00:09:26.193 clat (usec): min=3276, max=25234, avg=20337.22, stdev=2789.13 00:09:26.193 lat (usec): min=3290, max=25251, avg=20498.96, stdev=2696.92 00:09:26.193 clat percentiles (usec): 00:09:26.193 | 1.00th=[ 7635], 5.00th=[16909], 10.00th=[17695], 20.00th=[18220], 00:09:26.193 | 30.00th=[19268], 40.00th=[20055], 50.00th=[21365], 60.00th=[21627], 00:09:26.193 | 70.00th=[21890], 80.00th=[22152], 90.00th=[22414], 95.00th=[22676], 00:09:26.193 | 99.00th=[25035], 99.50th=[25297], 99.90th=[25297], 99.95th=[25297], 00:09:26.193 | 99.99th=[25297] 00:09:26.193 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:09:26.193 slat (usec): min=10, max=5704, avg=157.09, stdev=742.85 00:09:26.193 clat (usec): min=14303, max=24731, avg=21213.30, stdev=1850.69 00:09:26.193 lat (usec): min=14455, max=24763, avg=21370.39, stdev=1692.24 00:09:26.193 clat percentiles (usec): 00:09:26.193 | 1.00th=[16319], 5.00th=[18220], 10.00th=[18744], 20.00th=[19006], 00:09:26.193 | 30.00th=[20317], 40.00th=[21103], 50.00th=[21627], 60.00th=[21890], 00:09:26.193 | 70.00th=[22152], 80.00th=[22676], 90.00th=[23462], 95.00th=[24249], 00:09:26.193 | 99.00th=[24773], 99.50th=[24773], 99.90th=[24773], 99.95th=[24773], 00:09:26.193 | 99.99th=[24773] 00:09:26.193 bw ( KiB/s): min=12288, max=12288, per=20.16%, avg=12288.00, stdev= 0.00, samples=2 00:09:26.193 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:26.193 lat (msec) : 4=0.21%, 10=0.53%, 20=30.58%, 50=68.69% 00:09:26.193 cpu : usr=3.69%, sys=9.17%, ctx=191, majf=0, minf=4 00:09:26.193 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:26.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.193 issued rwts: total=3021,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.193 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.193 job1: (groupid=0, jobs=1): err= 0: pid=63768: Wed Nov 6 15:07:55 2024 00:09:26.193 read: IOPS=3028, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1004msec) 00:09:26.193 slat (usec): min=5, max=5334, avg=158.58, stdev=788.17 00:09:26.193 clat (usec): min=219, max=23205, avg=20459.55, stdev=3038.02 00:09:26.193 lat (usec): min=3207, max=23263, avg=20618.13, stdev=2945.73 00:09:26.193 clat percentiles (usec): 00:09:26.193 | 1.00th=[ 3785], 5.00th=[16712], 10.00th=[17695], 20.00th=[18220], 00:09:26.193 | 30.00th=[20317], 40.00th=[21365], 50.00th=[21627], 60.00th=[21890], 00:09:26.193 | 70.00th=[22152], 80.00th=[22152], 90.00th=[22414], 95.00th=[22676], 00:09:26.193 | 99.00th=[22938], 99.50th=[22938], 99.90th=[23200], 99.95th=[23200], 00:09:26.193 | 99.99th=[23200] 00:09:26.193 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:09:26.193 slat (usec): min=18, max=6494, avg=159.87, stdev=745.64 00:09:26.193 clat (usec): min=13885, max=23841, avg=20798.47, stdev=1510.62 00:09:26.193 lat (usec): 
min=17949, max=23866, avg=20958.35, stdev=1326.60 00:09:26.193 clat percentiles (usec): 00:09:26.193 | 1.00th=[16319], 5.00th=[18220], 10.00th=[18744], 20.00th=[19006], 00:09:26.193 | 30.00th=[20317], 40.00th=[20841], 50.00th=[21365], 60.00th=[21627], 00:09:26.193 | 70.00th=[21627], 80.00th=[21890], 90.00th=[22152], 95.00th=[22676], 00:09:26.193 | 99.00th=[23725], 99.50th=[23725], 99.90th=[23725], 99.95th=[23725], 00:09:26.193 | 99.99th=[23725] 00:09:26.193 bw ( KiB/s): min=12288, max=12312, per=20.18%, avg=12300.00, stdev=16.97, samples=2 00:09:26.193 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:09:26.193 lat (usec) : 250=0.02% 00:09:26.193 lat (msec) : 4=0.52%, 10=0.52%, 20=25.78%, 50=73.16% 00:09:26.193 cpu : usr=3.39%, sys=9.97%, ctx=210, majf=0, minf=5 00:09:26.193 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:26.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.193 issued rwts: total=3041,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.193 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.194 job2: (groupid=0, jobs=1): err= 0: pid=63769: Wed Nov 6 15:07:55 2024 00:09:26.194 read: IOPS=4236, BW=16.5MiB/s (17.4MB/s)(16.6MiB/1006msec) 00:09:26.194 slat (usec): min=7, max=5118, avg=107.28, stdev=507.89 00:09:26.194 clat (usec): min=888, max=28616, avg=14216.97, stdev=3109.21 00:09:26.194 lat (usec): min=3907, max=28627, avg=14324.26, stdev=3086.88 00:09:26.194 clat percentiles (usec): 00:09:26.194 | 1.00th=[ 9503], 5.00th=[12256], 10.00th=[12649], 20.00th=[12780], 00:09:26.194 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:09:26.194 | 70.00th=[13698], 80.00th=[13960], 90.00th=[20055], 95.00th=[21103], 00:09:26.194 | 99.00th=[25560], 99.50th=[26084], 99.90th=[28705], 99.95th=[28705], 00:09:26.194 | 99.99th=[28705] 00:09:26.194 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:09:26.194 slat (usec): min=8, max=9996, avg=110.73, stdev=462.90 00:09:26.194 clat (usec): min=9029, max=24832, avg=14361.47, stdev=2987.63 00:09:26.194 lat (usec): min=11141, max=24850, avg=14472.20, stdev=2977.73 00:09:26.194 clat percentiles (usec): 00:09:26.194 | 1.00th=[10683], 5.00th=[12387], 10.00th=[12649], 20.00th=[12911], 00:09:26.194 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:09:26.194 | 70.00th=[13698], 80.00th=[13960], 90.00th=[20317], 95.00th=[22676], 00:09:26.194 | 99.00th=[23987], 99.50th=[23987], 99.90th=[24773], 99.95th=[24773], 00:09:26.194 | 99.99th=[24773] 00:09:26.194 bw ( KiB/s): min=16384, max=20521, per=30.27%, avg=18452.50, stdev=2925.30, samples=2 00:09:26.194 iops : min= 4096, max= 5130, avg=4613.00, stdev=731.15, samples=2 00:09:26.194 lat (usec) : 1000=0.01% 00:09:26.194 lat (msec) : 4=0.07%, 10=0.64%, 20=88.26%, 50=11.01% 00:09:26.194 cpu : usr=3.88%, sys=12.44%, ctx=426, majf=0, minf=1 00:09:26.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:26.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.194 issued rwts: total=4262,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.194 job3: (groupid=0, jobs=1): err= 0: pid=63770: Wed Nov 6 15:07:55 2024 00:09:26.194 read: IOPS=4384, BW=17.1MiB/s 
(18.0MB/s)(17.3MiB/1008msec) 00:09:26.194 slat (usec): min=5, max=5424, avg=106.30, stdev=493.61 00:09:26.194 clat (usec): min=3594, max=26710, avg=14015.38, stdev=3051.05 00:09:26.194 lat (usec): min=6858, max=26721, avg=14121.68, stdev=3032.33 00:09:26.194 clat percentiles (usec): 00:09:26.194 | 1.00th=[10028], 5.00th=[11863], 10.00th=[12387], 20.00th=[12780], 00:09:26.194 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:09:26.194 | 70.00th=[13566], 80.00th=[13829], 90.00th=[19006], 95.00th=[21103], 00:09:26.194 | 99.00th=[25560], 99.50th=[26346], 99.90th=[26608], 99.95th=[26608], 00:09:26.194 | 99.99th=[26608] 00:09:26.194 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:09:26.194 slat (usec): min=12, max=4463, avg=107.60, stdev=435.42 00:09:26.194 clat (usec): min=9476, max=26475, avg=14147.42, stdev=3014.56 00:09:26.194 lat (usec): min=11074, max=26496, avg=14255.02, stdev=3007.48 00:09:26.194 clat percentiles (usec): 00:09:26.194 | 1.00th=[10683], 5.00th=[12125], 10.00th=[12518], 20.00th=[12780], 00:09:26.194 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:09:26.194 | 70.00th=[13566], 80.00th=[13698], 90.00th=[17957], 95.00th=[23462], 00:09:26.194 | 99.00th=[24773], 99.50th=[25560], 99.90th=[26084], 99.95th=[26346], 00:09:26.194 | 99.99th=[26346] 00:09:26.194 bw ( KiB/s): min=16384, max=20480, per=30.24%, avg=18432.00, stdev=2896.31, samples=2 00:09:26.194 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:09:26.194 lat (msec) : 4=0.01%, 10=0.52%, 20=90.67%, 50=8.79% 00:09:26.194 cpu : usr=4.57%, sys=12.61%, ctx=419, majf=0, minf=1 00:09:26.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:26.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.194 issued rwts: total=4420,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.194 00:09:26.194 Run status group 0 (all jobs): 00:09:26.194 READ: bw=57.1MiB/s (59.9MB/s), 11.8MiB/s-17.1MiB/s (12.3MB/s-18.0MB/s), io=57.6MiB (60.4MB), run=1004-1008msec 00:09:26.194 WRITE: bw=59.5MiB/s (62.4MB/s), 12.0MiB/s-17.9MiB/s (12.5MB/s-18.8MB/s), io=60.0MiB (62.9MB), run=1004-1008msec 00:09:26.194 00:09:26.194 Disk stats (read/write): 00:09:26.194 nvme0n1: ios=2609/2560, merge=0/0, ticks=13066/12135, in_queue=25201, util=88.16% 00:09:26.194 nvme0n2: ios=2560/2560, merge=0/0, ticks=12489/12285, in_queue=24774, util=87.87% 00:09:26.194 nvme0n3: ios=3968/4096, merge=0/0, ticks=11899/11962, in_queue=23861, util=88.92% 00:09:26.194 nvme0n4: ios=4096/4111, merge=0/0, ticks=11976/11639, in_queue=23615, util=89.71% 00:09:26.194 15:07:55 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:26.194 [global] 00:09:26.194 thread=1 00:09:26.194 invalidate=1 00:09:26.194 rw=randwrite 00:09:26.194 time_based=1 00:09:26.194 runtime=1 00:09:26.194 ioengine=libaio 00:09:26.194 direct=1 00:09:26.194 bs=4096 00:09:26.194 iodepth=128 00:09:26.194 norandommap=0 00:09:26.194 numjobs=1 00:09:26.194 00:09:26.194 verify_dump=1 00:09:26.194 verify_backlog=512 00:09:26.194 verify_state_save=0 00:09:26.194 do_verify=1 00:09:26.194 verify=crc32c-intel 00:09:26.194 [job0] 00:09:26.194 filename=/dev/nvme0n1 00:09:26.194 [job1] 00:09:26.194 filename=/dev/nvme0n2 00:09:26.194 [job2] 00:09:26.194 
filename=/dev/nvme0n3 00:09:26.194 [job3] 00:09:26.194 filename=/dev/nvme0n4 00:09:26.194 Could not set queue depth (nvme0n1) 00:09:26.194 Could not set queue depth (nvme0n2) 00:09:26.194 Could not set queue depth (nvme0n3) 00:09:26.194 Could not set queue depth (nvme0n4) 00:09:26.194 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.194 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.194 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.194 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.194 fio-3.35 00:09:26.194 Starting 4 threads 00:09:27.572 00:09:27.572 job0: (groupid=0, jobs=1): err= 0: pid=63830: Wed Nov 6 15:07:56 2024 00:09:27.572 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:09:27.572 slat (usec): min=4, max=10246, avg=85.67, stdev=528.36 00:09:27.572 clat (usec): min=2081, max=23013, avg=11521.19, stdev=2372.83 00:09:27.572 lat (usec): min=2094, max=23047, avg=11606.86, stdev=2388.88 00:09:27.572 clat percentiles (usec): 00:09:27.572 | 1.00th=[ 5669], 5.00th=[ 7504], 10.00th=[10552], 20.00th=[10814], 00:09:27.572 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:09:27.572 | 70.00th=[11731], 80.00th=[11863], 90.00th=[12256], 95.00th=[17433], 00:09:27.572 | 99.00th=[21103], 99.50th=[21627], 99.90th=[22414], 99.95th=[22676], 00:09:27.572 | 99.99th=[22938] 00:09:27.572 write: IOPS=5625, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:09:27.572 slat (usec): min=5, max=7169, avg=84.26, stdev=465.67 00:09:27.572 clat (usec): min=982, max=22223, avg=11002.07, stdev=1637.24 00:09:27.572 lat (usec): min=1003, max=22230, avg=11086.33, stdev=1593.83 00:09:27.572 clat percentiles (usec): 00:09:27.572 | 1.00th=[ 3294], 5.00th=[ 8094], 10.00th=[10159], 20.00th=[10421], 00:09:27.572 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:09:27.572 | 70.00th=[11469], 80.00th=[11863], 90.00th=[12649], 95.00th=[12911], 00:09:27.572 | 99.00th=[14222], 99.50th=[14746], 99.90th=[15664], 99.95th=[21890], 00:09:27.572 | 99.99th=[22152] 00:09:27.572 bw ( KiB/s): min=20496, max=24560, per=34.40%, avg=22528.00, stdev=2873.68, samples=2 00:09:27.572 iops : min= 5124, max= 6140, avg=5632.00, stdev=718.42, samples=2 00:09:27.572 lat (usec) : 1000=0.01% 00:09:27.572 lat (msec) : 2=0.03%, 4=0.98%, 10=7.30%, 20=90.86%, 50=0.83% 00:09:27.572 cpu : usr=5.19%, sys=13.69%, ctx=364, majf=0, minf=1 00:09:27.572 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:27.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.572 issued rwts: total=5632,5637,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.572 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.572 job1: (groupid=0, jobs=1): err= 0: pid=63831: Wed Nov 6 15:07:56 2024 00:09:27.572 read: IOPS=5602, BW=21.9MiB/s (22.9MB/s)(21.9MiB/1002msec) 00:09:27.573 slat (usec): min=8, max=3505, avg=84.82, stdev=353.91 00:09:27.573 clat (usec): min=406, max=14406, avg=11146.76, stdev=1248.83 00:09:27.573 lat (usec): min=2961, max=15168, avg=11231.58, stdev=1265.47 00:09:27.573 clat percentiles (usec): 00:09:27.573 | 1.00th=[ 5669], 5.00th=[ 9503], 10.00th=[ 9896], 
20.00th=[10421], 00:09:27.573 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:09:27.573 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12649], 95.00th=[12911], 00:09:27.573 | 99.00th=[13304], 99.50th=[13698], 99.90th=[14353], 99.95th=[14353], 00:09:27.573 | 99.99th=[14353] 00:09:27.573 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:09:27.573 slat (usec): min=11, max=3146, avg=85.54, stdev=388.28 00:09:27.573 clat (usec): min=8568, max=15548, avg=11361.86, stdev=804.32 00:09:27.573 lat (usec): min=8589, max=15578, avg=11447.41, stdev=883.00 00:09:27.573 clat percentiles (usec): 00:09:27.573 | 1.00th=[ 9372], 5.00th=[10159], 10.00th=[10683], 20.00th=[10945], 00:09:27.573 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11207], 60.00th=[11338], 00:09:27.573 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12387], 95.00th=[12780], 00:09:27.573 | 99.00th=[14222], 99.50th=[14615], 99.90th=[15533], 99.95th=[15533], 00:09:27.573 | 99.99th=[15533] 00:09:27.573 bw ( KiB/s): min=21672, max=23384, per=34.40%, avg=22528.00, stdev=1210.57, samples=2 00:09:27.573 iops : min= 5418, max= 5846, avg=5632.00, stdev=302.64, samples=2 00:09:27.573 lat (usec) : 500=0.01% 00:09:27.573 lat (msec) : 4=0.37%, 10=7.61%, 20=92.01% 00:09:27.573 cpu : usr=4.60%, sys=15.38%, ctx=468, majf=0, minf=2 00:09:27.573 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:27.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.573 issued rwts: total=5614,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.573 job2: (groupid=0, jobs=1): err= 0: pid=63832: Wed Nov 6 15:07:56 2024 00:09:27.573 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:09:27.573 slat (usec): min=9, max=26126, avg=179.82, stdev=1310.29 00:09:27.573 clat (usec): min=8729, max=53908, avg=24634.59, stdev=6387.23 00:09:27.573 lat (usec): min=8742, max=53947, avg=24814.41, stdev=6455.06 00:09:27.573 clat percentiles (usec): 00:09:27.573 | 1.00th=[12649], 5.00th=[17171], 10.00th=[17695], 20.00th=[18220], 00:09:27.573 | 30.00th=[19006], 40.00th=[23987], 50.00th=[24773], 60.00th=[25297], 00:09:27.573 | 70.00th=[26346], 80.00th=[29754], 90.00th=[34341], 95.00th=[36963], 00:09:27.573 | 99.00th=[37487], 99.50th=[37487], 99.90th=[42730], 99.95th=[52691], 00:09:27.573 | 99.99th=[53740] 00:09:27.573 write: IOPS=3098, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1002msec); 0 zone resets 00:09:27.573 slat (usec): min=6, max=17376, avg=135.85, stdev=868.04 00:09:27.573 clat (usec): min=1679, max=36759, avg=16525.72, stdev=3383.83 00:09:27.573 lat (usec): min=1706, max=36804, avg=16661.58, stdev=3305.64 00:09:27.573 clat percentiles (usec): 00:09:27.573 | 1.00th=[ 5800], 5.00th=[12911], 10.00th=[13435], 20.00th=[13960], 00:09:27.573 | 30.00th=[15139], 40.00th=[15664], 50.00th=[16188], 60.00th=[16712], 00:09:27.573 | 70.00th=[17433], 80.00th=[19530], 90.00th=[20317], 95.00th=[20579], 00:09:27.573 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29492], 99.95th=[34341], 00:09:27.573 | 99.99th=[36963] 00:09:27.573 bw ( KiB/s): min=12288, max=12312, per=18.78%, avg=12300.00, stdev=16.97, samples=2 00:09:27.573 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:09:27.573 lat (msec) : 2=0.18%, 4=0.13%, 10=0.71%, 20=59.06%, 50=39.89% 00:09:27.573 lat (msec) : 100=0.03% 00:09:27.573 cpu : usr=2.60%, sys=9.39%, ctx=153, 
majf=0, minf=1 00:09:27.573 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:27.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.573 issued rwts: total=3072,3105,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.573 job3: (groupid=0, jobs=1): err= 0: pid=63833: Wed Nov 6 15:07:56 2024 00:09:27.573 read: IOPS=1908, BW=7633KiB/s (7816kB/s)(7656KiB/1003msec) 00:09:27.573 slat (usec): min=10, max=15546, avg=235.47, stdev=1157.31 00:09:27.573 clat (usec): min=2273, max=57661, avg=30608.12, stdev=10096.08 00:09:27.573 lat (usec): min=5348, max=57700, avg=30843.59, stdev=10182.81 00:09:27.573 clat percentiles (usec): 00:09:27.573 | 1.00th=[ 5735], 5.00th=[19268], 10.00th=[24249], 20.00th=[24511], 00:09:27.573 | 30.00th=[24773], 40.00th=[25297], 50.00th=[26084], 60.00th=[27919], 00:09:27.573 | 70.00th=[33817], 80.00th=[40109], 90.00th=[48497], 95.00th=[49546], 00:09:27.573 | 99.00th=[51119], 99.50th=[52167], 99.90th=[57410], 99.95th=[57410], 00:09:27.573 | 99.99th=[57410] 00:09:27.573 write: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec); 0 zone resets 00:09:27.573 slat (usec): min=14, max=15217, avg=259.19, stdev=1249.69 00:09:27.573 clat (usec): min=12810, max=90258, avg=32985.96, stdev=19778.90 00:09:27.573 lat (usec): min=12832, max=90301, avg=33245.16, stdev=19907.50 00:09:27.573 clat percentiles (usec): 00:09:27.573 | 1.00th=[12911], 5.00th=[13042], 10.00th=[13173], 20.00th=[18482], 00:09:27.573 | 30.00th=[20055], 40.00th=[23987], 50.00th=[26608], 60.00th=[30278], 00:09:27.573 | 70.00th=[30802], 80.00th=[44303], 90.00th=[69731], 95.00th=[79168], 00:09:27.573 | 99.00th=[87557], 99.50th=[89654], 99.90th=[90702], 99.95th=[90702], 00:09:27.573 | 99.99th=[90702] 00:09:27.573 bw ( KiB/s): min= 7144, max= 9240, per=12.51%, avg=8192.00, stdev=1482.10, samples=2 00:09:27.573 iops : min= 1786, max= 2310, avg=2048.00, stdev=370.52, samples=2 00:09:27.573 lat (msec) : 4=0.03%, 10=1.06%, 20=17.39%, 50=71.53%, 100=9.99% 00:09:27.573 cpu : usr=2.10%, sys=6.59%, ctx=191, majf=0, minf=13 00:09:27.573 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:09:27.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.573 issued rwts: total=1914,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.573 00:09:27.573 Run status group 0 (all jobs): 00:09:27.573 READ: bw=63.2MiB/s (66.3MB/s), 7633KiB/s-22.0MiB/s (7816kB/s-23.0MB/s), io=63.4MiB (66.5MB), run=1002-1003msec 00:09:27.573 WRITE: bw=64.0MiB/s (67.1MB/s), 8167KiB/s-22.0MiB/s (8364kB/s-23.0MB/s), io=64.1MiB (67.3MB), run=1002-1003msec 00:09:27.573 00:09:27.573 Disk stats (read/write): 00:09:27.573 nvme0n1: ios=4658/4991, merge=0/0, ticks=50535/50954, in_queue=101489, util=88.28% 00:09:27.573 nvme0n2: ios=4656/5034, merge=0/0, ticks=16080/15845, in_queue=31925, util=88.78% 00:09:27.573 nvme0n3: ios=2552/2568, merge=0/0, ticks=61685/41439, in_queue=103124, util=88.95% 00:09:27.573 nvme0n4: ios=1536/1943, merge=0/0, ticks=15870/17906, in_queue=33776, util=89.70% 00:09:27.573 15:07:56 -- target/fio.sh@55 -- # sync 00:09:27.573 15:07:56 -- target/fio.sh@59 -- # fio_pid=63846 00:09:27.573 15:07:56 -- target/fio.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:27.573 15:07:56 -- target/fio.sh@61 -- # sleep 3 00:09:27.573 [global] 00:09:27.573 thread=1 00:09:27.573 invalidate=1 00:09:27.573 rw=read 00:09:27.573 time_based=1 00:09:27.573 runtime=10 00:09:27.573 ioengine=libaio 00:09:27.573 direct=1 00:09:27.573 bs=4096 00:09:27.573 iodepth=1 00:09:27.573 norandommap=1 00:09:27.573 numjobs=1 00:09:27.573 00:09:27.573 [job0] 00:09:27.573 filename=/dev/nvme0n1 00:09:27.573 [job1] 00:09:27.573 filename=/dev/nvme0n2 00:09:27.573 [job2] 00:09:27.573 filename=/dev/nvme0n3 00:09:27.573 [job3] 00:09:27.573 filename=/dev/nvme0n4 00:09:27.573 Could not set queue depth (nvme0n1) 00:09:27.573 Could not set queue depth (nvme0n2) 00:09:27.573 Could not set queue depth (nvme0n3) 00:09:27.573 Could not set queue depth (nvme0n4) 00:09:27.573 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.573 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.573 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.573 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.573 fio-3.35 00:09:27.573 Starting 4 threads 00:09:30.860 15:07:59 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:30.860 fio: pid=63889, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:30.860 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=33779712, buflen=4096 00:09:30.860 15:07:59 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:30.860 fio: pid=63888, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:30.860 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=37285888, buflen=4096 00:09:30.860 15:08:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:30.861 15:08:00 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:31.119 fio: pid=63886, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:31.119 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=41881600, buflen=4096 00:09:31.119 15:08:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:31.119 15:08:00 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:31.378 fio: pid=63887, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:31.378 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=14262272, buflen=4096 00:09:31.378 15:08:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:31.378 15:08:00 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:31.378 00:09:31.378 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63886: Wed Nov 6 15:08:00 2024 00:09:31.378 read: IOPS=2971, BW=11.6MiB/s (12.2MB/s)(39.9MiB/3441msec) 00:09:31.378 slat (usec): min=10, max=10661, avg=17.88, stdev=168.68 00:09:31.378 clat (usec): min=3, max=4322, avg=317.03, stdev=108.18 00:09:31.378 lat 
(usec): min=141, max=10834, avg=334.92, stdev=198.95 00:09:31.378 clat percentiles (usec): 00:09:31.378 | 1.00th=[ 149], 5.00th=[ 165], 10.00th=[ 182], 20.00th=[ 235], 00:09:31.378 | 30.00th=[ 310], 40.00th=[ 343], 50.00th=[ 351], 60.00th=[ 355], 00:09:31.378 | 70.00th=[ 363], 80.00th=[ 367], 90.00th=[ 375], 95.00th=[ 383], 00:09:31.378 | 99.00th=[ 412], 99.50th=[ 453], 99.90th=[ 1303], 99.95th=[ 2180], 00:09:31.378 | 99.99th=[ 3392] 00:09:31.378 bw ( KiB/s): min=10664, max=12424, per=21.72%, avg=11142.00, stdev=706.54, samples=6 00:09:31.378 iops : min= 2666, max= 3106, avg=2785.50, stdev=176.63, samples=6 00:09:31.378 lat (usec) : 4=0.01%, 250=24.00%, 500=75.61%, 750=0.20%, 1000=0.04% 00:09:31.378 lat (msec) : 2=0.08%, 4=0.05%, 10=0.01% 00:09:31.378 cpu : usr=1.05%, sys=3.95%, ctx=10235, majf=0, minf=1 00:09:31.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.378 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.378 issued rwts: total=10226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.378 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63887: Wed Nov 6 15:08:00 2024 00:09:31.378 read: IOPS=5369, BW=21.0MiB/s (22.0MB/s)(77.6MiB/3700msec) 00:09:31.378 slat (usec): min=8, max=12732, avg=17.08, stdev=163.62 00:09:31.378 clat (usec): min=125, max=3414, avg=167.67, stdev=55.84 00:09:31.378 lat (usec): min=137, max=12983, avg=184.75, stdev=173.40 00:09:31.378 clat percentiles (usec): 00:09:31.378 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:09:31.378 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:09:31.378 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 198], 95.00th=[ 227], 00:09:31.378 | 99.00th=[ 253], 99.50th=[ 269], 99.90th=[ 881], 99.95th=[ 1237], 00:09:31.378 | 99.99th=[ 3032] 00:09:31.378 bw ( KiB/s): min=16522, max=23200, per=42.16%, avg=21623.71, stdev=2385.84, samples=7 00:09:31.378 iops : min= 4130, max= 5800, avg=5405.86, stdev=596.64, samples=7 00:09:31.378 lat (usec) : 250=98.86%, 500=0.97%, 750=0.04%, 1000=0.04% 00:09:31.378 lat (msec) : 2=0.07%, 4=0.02% 00:09:31.378 cpu : usr=1.62%, sys=6.70%, ctx=19886, majf=0, minf=1 00:09:31.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.378 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.378 issued rwts: total=19867,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.378 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63888: Wed Nov 6 15:08:00 2024 00:09:31.378 read: IOPS=2853, BW=11.1MiB/s (11.7MB/s)(35.6MiB/3190msec) 00:09:31.378 slat (usec): min=9, max=7775, avg=25.57, stdev=112.46 00:09:31.378 clat (usec): min=146, max=2853, avg=322.36, stdev=59.00 00:09:31.378 lat (usec): min=159, max=8013, avg=347.93, stdev=127.12 00:09:31.378 clat percentiles (usec): 00:09:31.378 | 1.00th=[ 188], 5.00th=[ 225], 10.00th=[ 237], 20.00th=[ 269], 00:09:31.378 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 347], 00:09:31.378 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 367], 95.00th=[ 371], 00:09:31.378 | 99.00th=[ 392], 99.50th=[ 408], 99.90th=[ 
482], 99.95th=[ 603], 00:09:31.378 | 99.99th=[ 2868] 00:09:31.378 bw ( KiB/s): min=10677, max=12496, per=21.71%, avg=11132.83, stdev=734.96, samples=6 00:09:31.378 iops : min= 2669, max= 3124, avg=2783.17, stdev=183.77, samples=6 00:09:31.378 lat (usec) : 250=15.63%, 500=84.29%, 750=0.02%, 1000=0.01% 00:09:31.378 lat (msec) : 2=0.02%, 4=0.01% 00:09:31.378 cpu : usr=1.98%, sys=5.80%, ctx=9110, majf=0, minf=2 00:09:31.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.378 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.378 issued rwts: total=9104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.378 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63889: Wed Nov 6 15:08:00 2024 00:09:31.378 read: IOPS=2840, BW=11.1MiB/s (11.6MB/s)(32.2MiB/2904msec) 00:09:31.378 slat (usec): min=10, max=304, avg=20.70, stdev= 5.82 00:09:31.378 clat (usec): min=142, max=8041, avg=329.19, stdev=114.04 00:09:31.378 lat (usec): min=160, max=8057, avg=349.89, stdev=114.49 00:09:31.378 clat percentiles (usec): 00:09:31.378 | 1.00th=[ 157], 5.00th=[ 169], 10.00th=[ 182], 20.00th=[ 330], 00:09:31.378 | 30.00th=[ 338], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 351], 00:09:31.378 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 371], 95.00th=[ 379], 00:09:31.378 | 99.00th=[ 400], 99.50th=[ 429], 99.90th=[ 644], 99.95th=[ 1221], 00:09:31.378 | 99.99th=[ 8029] 00:09:31.378 bw ( KiB/s): min=10664, max=14624, per=22.43%, avg=11506.40, stdev=1743.43, samples=5 00:09:31.378 iops : min= 2666, max= 3656, avg=2876.60, stdev=435.86, samples=5 00:09:31.378 lat (usec) : 250=13.18%, 500=86.58%, 750=0.15%, 1000=0.02% 00:09:31.378 lat (msec) : 2=0.02%, 4=0.02%, 10=0.01% 00:09:31.378 cpu : usr=1.24%, sys=5.82%, ctx=8252, majf=0, minf=2 00:09:31.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.378 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.378 issued rwts: total=8248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.378 00:09:31.378 Run status group 0 (all jobs): 00:09:31.378 READ: bw=50.1MiB/s (52.5MB/s), 11.1MiB/s-21.0MiB/s (11.6MB/s-22.0MB/s), io=185MiB (194MB), run=2904-3700msec 00:09:31.378 00:09:31.378 Disk stats (read/write): 00:09:31.378 nvme0n1: ios=9958/0, merge=0/0, ticks=2931/0, in_queue=2931, util=95.42% 00:09:31.378 nvme0n2: ios=19394/0, merge=0/0, ticks=3289/0, in_queue=3289, util=95.40% 00:09:31.378 nvme0n3: ios=8813/0, merge=0/0, ticks=2863/0, in_queue=2863, util=96.40% 00:09:31.378 nvme0n4: ios=8158/0, merge=0/0, ticks=2669/0, in_queue=2669, util=96.56% 00:09:31.637 15:08:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:31.637 15:08:00 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:31.896 15:08:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:31.896 15:08:01 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:32.155 15:08:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:09:32.155 15:08:01 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:32.414 15:08:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:32.414 15:08:01 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:32.672 15:08:01 -- target/fio.sh@69 -- # fio_status=0 00:09:32.672 15:08:01 -- target/fio.sh@70 -- # wait 63846 00:09:32.672 15:08:01 -- target/fio.sh@70 -- # fio_status=4 00:09:32.672 15:08:01 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:32.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.672 15:08:01 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:32.672 15:08:01 -- common/autotest_common.sh@1208 -- # local i=0 00:09:32.672 15:08:01 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.672 15:08:01 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:09:32.931 15:08:01 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:09:32.931 15:08:01 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.931 nvmf hotplug test: fio failed as expected 00:09:32.931 15:08:01 -- common/autotest_common.sh@1220 -- # return 0 00:09:32.931 15:08:01 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:32.931 15:08:01 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:32.931 15:08:01 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:32.931 15:08:02 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:32.931 15:08:02 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:32.931 15:08:02 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:32.931 15:08:02 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:32.931 15:08:02 -- target/fio.sh@91 -- # nvmftestfini 00:09:32.931 15:08:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:32.931 15:08:02 -- nvmf/common.sh@116 -- # sync 00:09:33.190 15:08:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:33.190 15:08:02 -- nvmf/common.sh@119 -- # set +e 00:09:33.190 15:08:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:33.190 15:08:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:33.190 rmmod nvme_tcp 00:09:33.190 rmmod nvme_fabrics 00:09:33.190 rmmod nvme_keyring 00:09:33.190 15:08:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:33.190 15:08:02 -- nvmf/common.sh@123 -- # set -e 00:09:33.190 15:08:02 -- nvmf/common.sh@124 -- # return 0 00:09:33.190 15:08:02 -- nvmf/common.sh@477 -- # '[' -n 63456 ']' 00:09:33.190 15:08:02 -- nvmf/common.sh@478 -- # killprocess 63456 00:09:33.190 15:08:02 -- common/autotest_common.sh@936 -- # '[' -z 63456 ']' 00:09:33.190 15:08:02 -- common/autotest_common.sh@940 -- # kill -0 63456 00:09:33.190 15:08:02 -- common/autotest_common.sh@941 -- # uname 00:09:33.190 15:08:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:33.190 15:08:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63456 00:09:33.190 killing process with pid 63456 00:09:33.190 15:08:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:33.190 15:08:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:33.190 15:08:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63456' 00:09:33.190 15:08:02 
-- common/autotest_common.sh@955 -- # kill 63456 00:09:33.190 15:08:02 -- common/autotest_common.sh@960 -- # wait 63456 00:09:33.449 15:08:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:33.449 15:08:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:33.449 15:08:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:33.449 15:08:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:33.449 15:08:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:33.449 15:08:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.449 15:08:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.449 15:08:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.449 15:08:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:33.449 00:09:33.449 real 0m19.393s 00:09:33.449 user 1m13.771s 00:09:33.449 sys 0m10.059s 00:09:33.449 15:08:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:33.449 ************************************ 00:09:33.450 END TEST nvmf_fio_target 00:09:33.450 15:08:02 -- common/autotest_common.sh@10 -- # set +x 00:09:33.450 ************************************ 00:09:33.450 15:08:02 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:33.450 15:08:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:33.450 15:08:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:33.450 15:08:02 -- common/autotest_common.sh@10 -- # set +x 00:09:33.450 ************************************ 00:09:33.450 START TEST nvmf_bdevio 00:09:33.450 ************************************ 00:09:33.450 15:08:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:33.450 * Looking for test storage... 00:09:33.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:33.450 15:08:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:33.450 15:08:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:33.450 15:08:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:33.450 15:08:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:33.450 15:08:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:33.450 15:08:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:33.450 15:08:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:33.450 15:08:02 -- scripts/common.sh@335 -- # IFS=.-: 00:09:33.450 15:08:02 -- scripts/common.sh@335 -- # read -ra ver1 00:09:33.450 15:08:02 -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.450 15:08:02 -- scripts/common.sh@336 -- # read -ra ver2 00:09:33.450 15:08:02 -- scripts/common.sh@337 -- # local 'op=<' 00:09:33.450 15:08:02 -- scripts/common.sh@339 -- # ver1_l=2 00:09:33.450 15:08:02 -- scripts/common.sh@340 -- # ver2_l=1 00:09:33.450 15:08:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:33.450 15:08:02 -- scripts/common.sh@343 -- # case "$op" in 00:09:33.450 15:08:02 -- scripts/common.sh@344 -- # : 1 00:09:33.450 15:08:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:33.450 15:08:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.450 15:08:02 -- scripts/common.sh@364 -- # decimal 1 00:09:33.450 15:08:02 -- scripts/common.sh@352 -- # local d=1 00:09:33.450 15:08:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.709 15:08:02 -- scripts/common.sh@354 -- # echo 1 00:09:33.709 15:08:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:33.709 15:08:02 -- scripts/common.sh@365 -- # decimal 2 00:09:33.709 15:08:02 -- scripts/common.sh@352 -- # local d=2 00:09:33.709 15:08:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.709 15:08:02 -- scripts/common.sh@354 -- # echo 2 00:09:33.709 15:08:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:33.709 15:08:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:33.709 15:08:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:33.709 15:08:02 -- scripts/common.sh@367 -- # return 0 00:09:33.709 15:08:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.709 15:08:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:33.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.709 --rc genhtml_branch_coverage=1 00:09:33.709 --rc genhtml_function_coverage=1 00:09:33.709 --rc genhtml_legend=1 00:09:33.709 --rc geninfo_all_blocks=1 00:09:33.709 --rc geninfo_unexecuted_blocks=1 00:09:33.709 00:09:33.709 ' 00:09:33.709 15:08:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:33.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.709 --rc genhtml_branch_coverage=1 00:09:33.709 --rc genhtml_function_coverage=1 00:09:33.709 --rc genhtml_legend=1 00:09:33.709 --rc geninfo_all_blocks=1 00:09:33.709 --rc geninfo_unexecuted_blocks=1 00:09:33.709 00:09:33.709 ' 00:09:33.709 15:08:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:33.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.709 --rc genhtml_branch_coverage=1 00:09:33.709 --rc genhtml_function_coverage=1 00:09:33.709 --rc genhtml_legend=1 00:09:33.709 --rc geninfo_all_blocks=1 00:09:33.709 --rc geninfo_unexecuted_blocks=1 00:09:33.709 00:09:33.709 ' 00:09:33.709 15:08:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:33.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.709 --rc genhtml_branch_coverage=1 00:09:33.709 --rc genhtml_function_coverage=1 00:09:33.709 --rc genhtml_legend=1 00:09:33.709 --rc geninfo_all_blocks=1 00:09:33.709 --rc geninfo_unexecuted_blocks=1 00:09:33.709 00:09:33.709 ' 00:09:33.709 15:08:02 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:33.709 15:08:02 -- nvmf/common.sh@7 -- # uname -s 00:09:33.709 15:08:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.709 15:08:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.709 15:08:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.709 15:08:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.709 15:08:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.709 15:08:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.709 15:08:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.709 15:08:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.709 15:08:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.709 15:08:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.709 15:08:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:09:33.709 
15:08:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:09:33.709 15:08:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.709 15:08:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.709 15:08:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:33.709 15:08:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:33.709 15:08:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.709 15:08:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.709 15:08:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.709 15:08:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.710 15:08:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.710 15:08:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.710 15:08:02 -- paths/export.sh@5 -- # export PATH 00:09:33.710 15:08:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.710 15:08:02 -- nvmf/common.sh@46 -- # : 0 00:09:33.710 15:08:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:33.710 15:08:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:33.710 15:08:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:33.710 15:08:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.710 15:08:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.710 15:08:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
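Note on the identity set up above: the --hostnqn/--hostid pair defined here is what every later nvme connect / nvme disconnect in these tests presents to the target. A minimal sketch of the derivation, assuming the helper simply reuses the UUID portion of the generated NQN (the exact parsing inside nvmf/common.sh may differ):

    # assumed reconstruction, not the verbatim helper from nvmf/common.sh
    NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:819f6113-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # keep only the UUID part for --hostid
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")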
00:09:33.710 15:08:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:33.710 15:08:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:33.710 15:08:02 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:33.710 15:08:02 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:33.710 15:08:02 -- target/bdevio.sh@14 -- # nvmftestinit 00:09:33.710 15:08:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:33.710 15:08:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.710 15:08:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:33.710 15:08:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:33.710 15:08:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:33.710 15:08:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.710 15:08:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.710 15:08:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.710 15:08:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:33.710 15:08:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:33.710 15:08:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:33.710 15:08:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:33.710 15:08:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:33.710 15:08:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:33.710 15:08:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.710 15:08:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.710 15:08:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:33.710 15:08:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:33.710 15:08:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:33.710 15:08:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:33.710 15:08:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:33.710 15:08:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.710 15:08:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:33.710 15:08:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:33.710 15:08:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:33.710 15:08:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:33.710 15:08:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:33.710 15:08:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:33.710 Cannot find device "nvmf_tgt_br" 00:09:33.710 15:08:02 -- nvmf/common.sh@154 -- # true 00:09:33.710 15:08:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:33.710 Cannot find device "nvmf_tgt_br2" 00:09:33.710 15:08:02 -- nvmf/common.sh@155 -- # true 00:09:33.710 15:08:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:33.710 15:08:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:33.710 Cannot find device "nvmf_tgt_br" 00:09:33.710 15:08:02 -- nvmf/common.sh@157 -- # true 00:09:33.710 15:08:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:33.710 Cannot find device "nvmf_tgt_br2" 00:09:33.710 15:08:02 -- nvmf/common.sh@158 -- # true 00:09:33.710 15:08:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:33.710 15:08:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:33.710 15:08:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:33.710 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:09:33.710 15:08:02 -- nvmf/common.sh@161 -- # true 00:09:33.710 15:08:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:33.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:33.710 15:08:02 -- nvmf/common.sh@162 -- # true 00:09:33.710 15:08:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:33.710 15:08:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:33.710 15:08:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:33.710 15:08:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:33.710 15:08:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:33.710 15:08:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:33.710 15:08:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:33.710 15:08:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:33.710 15:08:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:33.710 15:08:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:33.710 15:08:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:33.710 15:08:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:33.710 15:08:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:33.710 15:08:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:33.969 15:08:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:33.969 15:08:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:33.969 15:08:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:33.969 15:08:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:33.969 15:08:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:33.969 15:08:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:33.969 15:08:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:33.969 15:08:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:33.969 15:08:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:33.969 15:08:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:33.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:09:33.969 00:09:33.969 --- 10.0.0.2 ping statistics --- 00:09:33.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.969 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:33.969 15:08:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:33.969 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:33.969 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:09:33.969 00:09:33.969 --- 10.0.0.3 ping statistics --- 00:09:33.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.969 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:09:33.969 15:08:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:33.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:33.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:09:33.969 00:09:33.969 --- 10.0.0.1 ping statistics --- 00:09:33.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.970 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:09:33.970 15:08:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.970 15:08:03 -- nvmf/common.sh@421 -- # return 0 00:09:33.970 15:08:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:33.970 15:08:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.970 15:08:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:33.970 15:08:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:33.970 15:08:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.970 15:08:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:33.970 15:08:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:33.970 15:08:03 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:33.970 15:08:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:33.970 15:08:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:33.970 15:08:03 -- common/autotest_common.sh@10 -- # set +x 00:09:33.970 15:08:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:33.970 15:08:03 -- nvmf/common.sh@469 -- # nvmfpid=64159 00:09:33.970 15:08:03 -- nvmf/common.sh@470 -- # waitforlisten 64159 00:09:33.970 15:08:03 -- common/autotest_common.sh@829 -- # '[' -z 64159 ']' 00:09:33.970 15:08:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.970 15:08:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.970 15:08:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.970 15:08:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.970 15:08:03 -- common/autotest_common.sh@10 -- # set +x 00:09:33.970 [2024-11-06 15:08:03.139782] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:33.970 [2024-11-06 15:08:03.139853] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.228 [2024-11-06 15:08:03.270522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.228 [2024-11-06 15:08:03.323142] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:34.228 [2024-11-06 15:08:03.323306] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.228 [2024-11-06 15:08:03.323318] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.229 [2024-11-06 15:08:03.323325] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
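The nvmf_veth_init sequence above reduces to the following topology; this is a condensed, copy-pasteable sketch of the commands just logged (interface names and addresses taken from the log, ordering slightly compacted), not the verbatim helper:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator leg
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target leg 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # target leg 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # initiator -> target addresses
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator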
00:09:34.229 [2024-11-06 15:08:03.323531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:34.229 [2024-11-06 15:08:03.323585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:09:34.229 [2024-11-06 15:08:03.324059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:09:34.229 [2024-11-06 15:08:03.324130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.164 15:08:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:35.164 15:08:04 -- common/autotest_common.sh@862 -- # return 0 00:09:35.164 15:08:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:35.164 15:08:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:35.164 15:08:04 -- common/autotest_common.sh@10 -- # set +x 00:09:35.164 15:08:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.164 15:08:04 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:35.164 15:08:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.164 15:08:04 -- common/autotest_common.sh@10 -- # set +x 00:09:35.164 [2024-11-06 15:08:04.149124] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.164 15:08:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.164 15:08:04 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:35.164 15:08:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.164 15:08:04 -- common/autotest_common.sh@10 -- # set +x 00:09:35.164 Malloc0 00:09:35.164 15:08:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.164 15:08:04 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:35.164 15:08:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.164 15:08:04 -- common/autotest_common.sh@10 -- # set +x 00:09:35.164 15:08:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.164 15:08:04 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:35.164 15:08:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.164 15:08:04 -- common/autotest_common.sh@10 -- # set +x 00:09:35.164 15:08:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.164 15:08:04 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.164 15:08:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.164 15:08:04 -- common/autotest_common.sh@10 -- # set +x 00:09:35.164 [2024-11-06 15:08:04.210159] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.164 15:08:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.164 15:08:04 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:35.164 15:08:04 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:35.164 15:08:04 -- nvmf/common.sh@520 -- # config=() 00:09:35.164 15:08:04 -- nvmf/common.sh@520 -- # local subsystem config 00:09:35.164 15:08:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:35.164 15:08:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:35.164 { 00:09:35.164 "params": { 00:09:35.164 "name": "Nvme$subsystem", 00:09:35.164 "trtype": "$TEST_TRANSPORT", 00:09:35.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:35.164 "adrfam": "ipv4", 00:09:35.164 "trsvcid": "$NVMF_PORT", 00:09:35.164 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:35.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:35.164 "hdgst": ${hdgst:-false}, 00:09:35.164 "ddgst": ${ddgst:-false} 00:09:35.164 }, 00:09:35.164 "method": "bdev_nvme_attach_controller" 00:09:35.164 } 00:09:35.164 EOF 00:09:35.164 )") 00:09:35.164 15:08:04 -- nvmf/common.sh@542 -- # cat 00:09:35.164 15:08:04 -- nvmf/common.sh@544 -- # jq . 00:09:35.164 15:08:04 -- nvmf/common.sh@545 -- # IFS=, 00:09:35.164 15:08:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:35.164 "params": { 00:09:35.164 "name": "Nvme1", 00:09:35.164 "trtype": "tcp", 00:09:35.164 "traddr": "10.0.0.2", 00:09:35.164 "adrfam": "ipv4", 00:09:35.164 "trsvcid": "4420", 00:09:35.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:35.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:35.164 "hdgst": false, 00:09:35.164 "ddgst": false 00:09:35.164 }, 00:09:35.164 "method": "bdev_nvme_attach_controller" 00:09:35.164 }' 00:09:35.164 [2024-11-06 15:08:04.268935] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:35.164 [2024-11-06 15:08:04.269052] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64195 ] 00:09:35.164 [2024-11-06 15:08:04.410937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:35.424 [2024-11-06 15:08:04.479961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.424 [2024-11-06 15:08:04.480106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.424 [2024-11-06 15:08:04.480106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.424 [2024-11-06 15:08:04.616862] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:09:35.424 [2024-11-06 15:08:04.616911] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:35.424 I/O targets: 00:09:35.424 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:35.424 00:09:35.424 00:09:35.424 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.424 http://cunit.sourceforge.net/ 00:09:35.424 00:09:35.424 00:09:35.424 Suite: bdevio tests on: Nvme1n1 00:09:35.424 Test: blockdev write read block ...passed 00:09:35.424 Test: blockdev write zeroes read block ...passed 00:09:35.424 Test: blockdev write zeroes read no split ...passed 00:09:35.424 Test: blockdev write zeroes read split ...passed 00:09:35.424 Test: blockdev write zeroes read split partial ...passed 00:09:35.424 Test: blockdev reset ...[2024-11-06 15:08:04.649327] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:35.424 [2024-11-06 15:08:04.649435] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b7c80 (9): Bad file descriptor 00:09:35.424 [2024-11-06 15:08:04.666448] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:35.424 passed 00:09:35.424 Test: blockdev write read 8 blocks ...passed 00:09:35.424 Test: blockdev write read size > 128k ...passed 00:09:35.424 Test: blockdev write read invalid size ...passed 00:09:35.424 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:35.424 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:35.424 Test: blockdev write read max offset ...passed 00:09:35.424 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:35.424 Test: blockdev writev readv 8 blocks ...passed 00:09:35.424 Test: blockdev writev readv 30 x 1block ...passed 00:09:35.424 Test: blockdev writev readv block ...passed 00:09:35.424 Test: blockdev writev readv size > 128k ...passed 00:09:35.424 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:35.424 Test: blockdev comparev and writev ...[2024-11-06 15:08:04.676741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:35.424 [2024-11-06 15:08:04.676789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:35.424 [2024-11-06 15:08:04.676811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:35.424 [2024-11-06 15:08:04.676823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:35.424 [2024-11-06 15:08:04.677307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:35.424 [2024-11-06 15:08:04.677340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:35.424 [2024-11-06 15:08:04.677359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:35.424 [2024-11-06 15:08:04.677370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:35.424 [2024-11-06 15:08:04.677759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:35.424 [2024-11-06 15:08:04.677792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:35.424 [2024-11-06 15:08:04.677810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:35.424 [2024-11-06 15:08:04.677821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:35.424 [2024-11-06 15:08:04.678368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:35.424 [2024-11-06 15:08:04.678399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:35.424 [2024-11-06 15:08:04.678418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:35.424 [2024-11-06 15:08:04.678428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:35.424 passed 00:09:35.424 Test: blockdev nvme passthru rw ...passed 00:09:35.424 Test: blockdev nvme passthru vendor specific ...[2024-11-06 15:08:04.679882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:35.424 [2024-11-06 15:08:04.679912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:35.424 [2024-11-06 15:08:04.680257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:35.424 [2024-11-06 15:08:04.680289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:35.424 [2024-11-06 15:08:04.680419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:35.424 [2024-11-06 15:08:04.680436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:35.424 [2024-11-06 15:08:04.680740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:35.424 [2024-11-06 15:08:04.680773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:35.424 passed 00:09:35.683 Test: blockdev nvme admin passthru ...passed 00:09:35.683 Test: blockdev copy ...passed 00:09:35.683 00:09:35.683 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.683 suites 1 1 n/a 0 0 00:09:35.683 tests 23 23 23 0 0 00:09:35.683 asserts 152 152 152 0 n/a 00:09:35.683 00:09:35.683 Elapsed time = 0.168 seconds 00:09:35.683 15:08:04 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.683 15:08:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.683 15:08:04 -- common/autotest_common.sh@10 -- # set +x 00:09:35.683 15:08:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.683 15:08:04 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:35.683 15:08:04 -- target/bdevio.sh@30 -- # nvmftestfini 00:09:35.683 15:08:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:35.683 15:08:04 -- nvmf/common.sh@116 -- # sync 00:09:35.683 15:08:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:35.683 15:08:04 -- nvmf/common.sh@119 -- # set +e 00:09:35.683 15:08:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:35.683 15:08:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:35.683 rmmod nvme_tcp 00:09:35.683 rmmod nvme_fabrics 00:09:35.683 rmmod nvme_keyring 00:09:35.941 15:08:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:35.941 15:08:04 -- nvmf/common.sh@123 -- # set -e 00:09:35.941 15:08:04 -- nvmf/common.sh@124 -- # return 0 00:09:35.941 15:08:04 -- nvmf/common.sh@477 -- # '[' -n 64159 ']' 00:09:35.941 15:08:04 -- nvmf/common.sh@478 -- # killprocess 64159 00:09:35.941 15:08:04 -- common/autotest_common.sh@936 -- # '[' -z 64159 ']' 00:09:35.941 15:08:04 -- common/autotest_common.sh@940 -- # kill -0 64159 00:09:35.941 15:08:04 -- common/autotest_common.sh@941 -- # uname 00:09:35.941 15:08:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:35.941 15:08:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64159 00:09:35.941 15:08:05 -- common/autotest_common.sh@942 -- 
# process_name=reactor_3 00:09:35.941 15:08:05 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:09:35.941 15:08:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64159' 00:09:35.941 killing process with pid 64159 00:09:35.941 15:08:05 -- common/autotest_common.sh@955 -- # kill 64159 00:09:35.941 15:08:05 -- common/autotest_common.sh@960 -- # wait 64159 00:09:36.200 15:08:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:36.200 15:08:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:36.200 15:08:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:36.200 15:08:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:36.200 15:08:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:36.200 15:08:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.200 15:08:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.200 15:08:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.200 15:08:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:36.200 00:09:36.200 real 0m2.679s 00:09:36.200 user 0m8.687s 00:09:36.200 sys 0m0.647s 00:09:36.200 15:08:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:36.200 ************************************ 00:09:36.200 15:08:05 -- common/autotest_common.sh@10 -- # set +x 00:09:36.200 END TEST nvmf_bdevio 00:09:36.200 ************************************ 00:09:36.200 15:08:05 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:09:36.200 15:08:05 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:09:36.200 15:08:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:36.200 15:08:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.200 15:08:05 -- common/autotest_common.sh@10 -- # set +x 00:09:36.200 ************************************ 00:09:36.200 START TEST nvmf_bdevio_no_huge 00:09:36.200 ************************************ 00:09:36.200 15:08:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:09:36.200 * Looking for test storage... 
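Before the no-huge variant starts, note that both teardowns above (pid 63456 after nvmf_fio_target, pid 64159 after nvmf_bdevio) follow the same nvmftestfini pattern. A simplified sketch of that cleanup, assuming $nvmfpid was captured at app start and that _remove_spdk_ns amounts to deleting the test namespace (the real killprocess helper also verifies the process name before signalling):

    # simplified reconstruction of the cleanup logged above; error handling omitted
    sync
    modprobe -v -r nvme-tcp nvme-fabrics            # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    if kill -0 "$nvmfpid" 2>/dev/null; then         # target still running?
        kill "$nvmfpid"
        wait "$nvmfpid"
    fi
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null    # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if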
00:09:36.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:36.200 15:08:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:36.200 15:08:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:36.200 15:08:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:36.200 15:08:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:36.200 15:08:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:36.200 15:08:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:36.200 15:08:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:36.200 15:08:05 -- scripts/common.sh@335 -- # IFS=.-: 00:09:36.200 15:08:05 -- scripts/common.sh@335 -- # read -ra ver1 00:09:36.200 15:08:05 -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.200 15:08:05 -- scripts/common.sh@336 -- # read -ra ver2 00:09:36.200 15:08:05 -- scripts/common.sh@337 -- # local 'op=<' 00:09:36.200 15:08:05 -- scripts/common.sh@339 -- # ver1_l=2 00:09:36.200 15:08:05 -- scripts/common.sh@340 -- # ver2_l=1 00:09:36.200 15:08:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:36.200 15:08:05 -- scripts/common.sh@343 -- # case "$op" in 00:09:36.200 15:08:05 -- scripts/common.sh@344 -- # : 1 00:09:36.200 15:08:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:36.200 15:08:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:36.461 15:08:05 -- scripts/common.sh@364 -- # decimal 1 00:09:36.461 15:08:05 -- scripts/common.sh@352 -- # local d=1 00:09:36.461 15:08:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.461 15:08:05 -- scripts/common.sh@354 -- # echo 1 00:09:36.461 15:08:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:36.461 15:08:05 -- scripts/common.sh@365 -- # decimal 2 00:09:36.461 15:08:05 -- scripts/common.sh@352 -- # local d=2 00:09:36.461 15:08:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.461 15:08:05 -- scripts/common.sh@354 -- # echo 2 00:09:36.461 15:08:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:36.461 15:08:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:36.461 15:08:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:36.461 15:08:05 -- scripts/common.sh@367 -- # return 0 00:09:36.461 15:08:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.461 15:08:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:36.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.461 --rc genhtml_branch_coverage=1 00:09:36.461 --rc genhtml_function_coverage=1 00:09:36.461 --rc genhtml_legend=1 00:09:36.461 --rc geninfo_all_blocks=1 00:09:36.461 --rc geninfo_unexecuted_blocks=1 00:09:36.461 00:09:36.461 ' 00:09:36.461 15:08:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:36.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.461 --rc genhtml_branch_coverage=1 00:09:36.461 --rc genhtml_function_coverage=1 00:09:36.461 --rc genhtml_legend=1 00:09:36.461 --rc geninfo_all_blocks=1 00:09:36.461 --rc geninfo_unexecuted_blocks=1 00:09:36.461 00:09:36.461 ' 00:09:36.461 15:08:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:36.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.461 --rc genhtml_branch_coverage=1 00:09:36.461 --rc genhtml_function_coverage=1 00:09:36.461 --rc genhtml_legend=1 00:09:36.461 --rc geninfo_all_blocks=1 00:09:36.461 --rc geninfo_unexecuted_blocks=1 00:09:36.461 00:09:36.461 ' 00:09:36.461 
15:08:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:36.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.461 --rc genhtml_branch_coverage=1 00:09:36.461 --rc genhtml_function_coverage=1 00:09:36.461 --rc genhtml_legend=1 00:09:36.461 --rc geninfo_all_blocks=1 00:09:36.461 --rc geninfo_unexecuted_blocks=1 00:09:36.461 00:09:36.461 ' 00:09:36.461 15:08:05 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:36.461 15:08:05 -- nvmf/common.sh@7 -- # uname -s 00:09:36.461 15:08:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.462 15:08:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.462 15:08:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.462 15:08:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.462 15:08:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.462 15:08:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.462 15:08:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.462 15:08:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.462 15:08:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.462 15:08:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.462 15:08:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:09:36.462 15:08:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:09:36.462 15:08:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.462 15:08:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.462 15:08:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:36.462 15:08:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:36.462 15:08:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.462 15:08:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.462 15:08:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.462 15:08:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.462 15:08:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.462 15:08:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.462 15:08:05 -- paths/export.sh@5 -- # export PATH 00:09:36.462 15:08:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.462 15:08:05 -- nvmf/common.sh@46 -- # : 0 00:09:36.462 15:08:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:36.462 15:08:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:36.462 15:08:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:36.462 15:08:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.462 15:08:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.462 15:08:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:36.462 15:08:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:36.462 15:08:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:36.462 15:08:05 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:36.462 15:08:05 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:36.462 15:08:05 -- target/bdevio.sh@14 -- # nvmftestinit 00:09:36.462 15:08:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:36.462 15:08:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.462 15:08:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:36.462 15:08:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:36.462 15:08:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:36.462 15:08:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.462 15:08:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.462 15:08:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.462 15:08:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:36.462 15:08:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:36.462 15:08:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:36.462 15:08:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:36.462 15:08:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:36.462 15:08:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:36.462 15:08:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.462 15:08:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.462 15:08:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:36.462 15:08:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:36.462 15:08:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:36.462 15:08:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:36.462 15:08:05 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:36.462 15:08:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.462 15:08:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:36.462 15:08:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:36.462 15:08:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:36.462 15:08:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:36.462 15:08:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:36.462 15:08:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:36.462 Cannot find device "nvmf_tgt_br" 00:09:36.462 15:08:05 -- nvmf/common.sh@154 -- # true 00:09:36.462 15:08:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:36.462 Cannot find device "nvmf_tgt_br2" 00:09:36.462 15:08:05 -- nvmf/common.sh@155 -- # true 00:09:36.462 15:08:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:36.462 15:08:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:36.462 Cannot find device "nvmf_tgt_br" 00:09:36.462 15:08:05 -- nvmf/common.sh@157 -- # true 00:09:36.462 15:08:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:36.462 Cannot find device "nvmf_tgt_br2" 00:09:36.462 15:08:05 -- nvmf/common.sh@158 -- # true 00:09:36.462 15:08:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:36.462 15:08:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:36.462 15:08:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:36.462 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:36.462 15:08:05 -- nvmf/common.sh@161 -- # true 00:09:36.462 15:08:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:36.462 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:36.462 15:08:05 -- nvmf/common.sh@162 -- # true 00:09:36.462 15:08:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:36.462 15:08:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:36.462 15:08:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:36.462 15:08:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:36.462 15:08:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:36.462 15:08:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:36.462 15:08:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:36.721 15:08:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:36.721 15:08:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:36.721 15:08:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:36.721 15:08:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:36.721 15:08:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:36.721 15:08:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:36.721 15:08:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:36.721 15:08:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:36.721 15:08:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:09:36.721 15:08:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:36.721 15:08:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:36.721 15:08:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:36.721 15:08:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:36.721 15:08:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:36.721 15:08:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:36.721 15:08:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:36.721 15:08:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:36.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:36.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:09:36.721 00:09:36.721 --- 10.0.0.2 ping statistics --- 00:09:36.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.721 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:36.721 15:08:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:36.721 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:36.721 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:09:36.721 00:09:36.721 --- 10.0.0.3 ping statistics --- 00:09:36.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.721 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:09:36.721 15:08:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:36.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:36.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:36.721 00:09:36.721 --- 10.0.0.1 ping statistics --- 00:09:36.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.721 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:36.721 15:08:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.721 15:08:05 -- nvmf/common.sh@421 -- # return 0 00:09:36.721 15:08:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:36.721 15:08:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.721 15:08:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:36.721 15:08:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:36.721 15:08:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.721 15:08:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:36.721 15:08:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:36.721 15:08:05 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:36.721 15:08:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:36.721 15:08:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:36.721 15:08:05 -- common/autotest_common.sh@10 -- # set +x 00:09:36.721 15:08:05 -- nvmf/common.sh@469 -- # nvmfpid=64381 00:09:36.721 15:08:05 -- nvmf/common.sh@470 -- # waitforlisten 64381 00:09:36.721 15:08:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:09:36.721 15:08:05 -- common/autotest_common.sh@829 -- # '[' -z 64381 ']' 00:09:36.721 15:08:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.722 15:08:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:36.722 15:08:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
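The only functional difference from the earlier nvmf_bdevio run is the app invocation: nvmf_tgt is launched with --no-huge and a 1024 MB anonymous memory pool instead of hugepage-backed memory. A sketch of that launch plus the wait that produces the 'Waiting for process...' line, assuming the default /var/tmp/spdk.sock RPC socket and using rpc_get_methods purely as a liveness probe (the real waitforlisten helper may differ):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                                   # keep polling until the RPC socket answers
    done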
00:09:36.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.722 15:08:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:36.722 15:08:05 -- common/autotest_common.sh@10 -- # set +x 00:09:36.722 [2024-11-06 15:08:05.945954] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:36.722 [2024-11-06 15:08:05.946077] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:09:36.980 [2024-11-06 15:08:06.094330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:36.980 [2024-11-06 15:08:06.191392] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:36.980 [2024-11-06 15:08:06.191539] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.980 [2024-11-06 15:08:06.191553] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.980 [2024-11-06 15:08:06.191561] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:36.980 [2024-11-06 15:08:06.192314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:36.980 [2024-11-06 15:08:06.192515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:09:36.980 [2024-11-06 15:08:06.192761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:09:36.980 [2024-11-06 15:08:06.192827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:37.917 15:08:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.917 15:08:06 -- common/autotest_common.sh@862 -- # return 0 00:09:37.917 15:08:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:37.917 15:08:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:37.917 15:08:06 -- common/autotest_common.sh@10 -- # set +x 00:09:37.917 15:08:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.917 15:08:06 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:37.917 15:08:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.917 15:08:06 -- common/autotest_common.sh@10 -- # set +x 00:09:37.917 [2024-11-06 15:08:06.996875] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.917 15:08:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.917 15:08:07 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:37.917 15:08:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.917 15:08:07 -- common/autotest_common.sh@10 -- # set +x 00:09:37.917 Malloc0 00:09:37.917 15:08:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.917 15:08:07 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:37.917 15:08:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.917 15:08:07 -- common/autotest_common.sh@10 -- # set +x 00:09:37.917 15:08:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.917 15:08:07 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:37.917 15:08:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.917 15:08:07 -- common/autotest_common.sh@10 -- # set +x 
00:09:37.917 15:08:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.917 15:08:07 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.917 15:08:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.917 15:08:07 -- common/autotest_common.sh@10 -- # set +x 00:09:37.917 [2024-11-06 15:08:07.045051] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.917 15:08:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.917 15:08:07 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:09:37.917 15:08:07 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:37.917 15:08:07 -- nvmf/common.sh@520 -- # config=() 00:09:37.917 15:08:07 -- nvmf/common.sh@520 -- # local subsystem config 00:09:37.917 15:08:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:37.917 15:08:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:37.917 { 00:09:37.917 "params": { 00:09:37.917 "name": "Nvme$subsystem", 00:09:37.917 "trtype": "$TEST_TRANSPORT", 00:09:37.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:37.917 "adrfam": "ipv4", 00:09:37.917 "trsvcid": "$NVMF_PORT", 00:09:37.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:37.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:37.917 "hdgst": ${hdgst:-false}, 00:09:37.917 "ddgst": ${ddgst:-false} 00:09:37.917 }, 00:09:37.917 "method": "bdev_nvme_attach_controller" 00:09:37.917 } 00:09:37.917 EOF 00:09:37.917 )") 00:09:37.917 15:08:07 -- nvmf/common.sh@542 -- # cat 00:09:37.917 15:08:07 -- nvmf/common.sh@544 -- # jq . 00:09:37.917 15:08:07 -- nvmf/common.sh@545 -- # IFS=, 00:09:37.917 15:08:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:37.917 "params": { 00:09:37.917 "name": "Nvme1", 00:09:37.917 "trtype": "tcp", 00:09:37.917 "traddr": "10.0.0.2", 00:09:37.917 "adrfam": "ipv4", 00:09:37.917 "trsvcid": "4420", 00:09:37.917 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:37.917 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:37.917 "hdgst": false, 00:09:37.917 "ddgst": false 00:09:37.917 }, 00:09:37.917 "method": "bdev_nvme_attach_controller" 00:09:37.917 }' 00:09:37.917 [2024-11-06 15:08:07.095478] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:37.917 [2024-11-06 15:08:07.095570] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid64418 ] 00:09:38.176 [2024-11-06 15:08:07.230891] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:38.176 [2024-11-06 15:08:07.362718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.176 [2024-11-06 15:08:07.362833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.176 [2024-11-06 15:08:07.362843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.465 [2024-11-06 15:08:07.530507] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
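The rpc_cmd setup traced above for the bdevio run boils down to five RPC calls against that target; restated as a standalone sketch (rpc.py path as used elsewhere in this job, addresses and names exactly as traced):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc0      # 64 MiB backing bdev, 512 B blocks (the Nvme1n1 seen below)
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420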
00:09:38.465 [2024-11-06 15:08:07.530836] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:38.465 I/O targets: 00:09:38.465 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:38.465 00:09:38.465 00:09:38.465 CUnit - A unit testing framework for C - Version 2.1-3 00:09:38.465 http://cunit.sourceforge.net/ 00:09:38.465 00:09:38.465 00:09:38.465 Suite: bdevio tests on: Nvme1n1 00:09:38.465 Test: blockdev write read block ...passed 00:09:38.465 Test: blockdev write zeroes read block ...passed 00:09:38.465 Test: blockdev write zeroes read no split ...passed 00:09:38.465 Test: blockdev write zeroes read split ...passed 00:09:38.465 Test: blockdev write zeroes read split partial ...passed 00:09:38.465 Test: blockdev reset ...[2024-11-06 15:08:07.568179] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:38.465 [2024-11-06 15:08:07.568299] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8680 (9): Bad file descriptor 00:09:38.465 [2024-11-06 15:08:07.588623] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:38.465 passed 00:09:38.465 Test: blockdev write read 8 blocks ...passed 00:09:38.465 Test: blockdev write read size > 128k ...passed 00:09:38.465 Test: blockdev write read invalid size ...passed 00:09:38.465 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:38.465 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:38.465 Test: blockdev write read max offset ...passed 00:09:38.465 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:38.465 Test: blockdev writev readv 8 blocks ...passed 00:09:38.465 Test: blockdev writev readv 30 x 1block ...passed 00:09:38.465 Test: blockdev writev readv block ...passed 00:09:38.465 Test: blockdev writev readv size > 128k ...passed 00:09:38.465 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:38.465 Test: blockdev comparev and writev ...[2024-11-06 15:08:07.599830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:38.465 [2024-11-06 15:08:07.600001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:38.465 [2024-11-06 15:08:07.600097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:38.465 [2024-11-06 15:08:07.600201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:38.465 [2024-11-06 15:08:07.600722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:38.465 [2024-11-06 15:08:07.600842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:38.465 [2024-11-06 15:08:07.600930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:38.465 [2024-11-06 15:08:07.601004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:38.465 [2024-11-06 15:08:07.601539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:38.465 [2024-11-06 15:08:07.601661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:38.465 [2024-11-06 15:08:07.601775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:38.465 [2024-11-06 15:08:07.601864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:38.466 [2024-11-06 15:08:07.602358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:38.466 [2024-11-06 15:08:07.602475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:38.466 [2024-11-06 15:08:07.602556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:38.466 [2024-11-06 15:08:07.602619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:38.466 passed 00:09:38.466 Test: blockdev nvme passthru rw ...passed 00:09:38.466 Test: blockdev nvme passthru vendor specific ...[2024-11-06 15:08:07.603787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:38.466 [2024-11-06 15:08:07.603897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:38.466 [2024-11-06 15:08:07.604189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:38.466 [2024-11-06 15:08:07.604303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:38.466 [2024-11-06 15:08:07.604585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:38.466 [2024-11-06 15:08:07.604706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:38.466 [2024-11-06 15:08:07.604990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:38.466 [2024-11-06 15:08:07.605095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:38.466 passed 00:09:38.466 Test: blockdev nvme admin passthru ...passed 00:09:38.466 Test: blockdev copy ...passed 00:09:38.466 00:09:38.466 Run Summary: Type Total Ran Passed Failed Inactive 00:09:38.466 suites 1 1 n/a 0 0 00:09:38.466 tests 23 23 23 0 0 00:09:38.466 asserts 152 152 152 0 n/a 00:09:38.466 00:09:38.466 Elapsed time = 0.173 seconds 00:09:38.724 15:08:07 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:38.725 15:08:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.725 15:08:07 -- common/autotest_common.sh@10 -- # set +x 00:09:38.725 15:08:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.725 15:08:07 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:38.725 15:08:07 -- target/bdevio.sh@30 -- # nvmftestfini 00:09:38.725 15:08:07 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:09:38.725 15:08:07 -- nvmf/common.sh@116 -- # sync 00:09:38.983 15:08:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:38.983 15:08:08 -- nvmf/common.sh@119 -- # set +e 00:09:38.984 15:08:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:38.984 15:08:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:38.984 rmmod nvme_tcp 00:09:38.984 rmmod nvme_fabrics 00:09:38.984 rmmod nvme_keyring 00:09:38.984 15:08:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:38.984 15:08:08 -- nvmf/common.sh@123 -- # set -e 00:09:38.984 15:08:08 -- nvmf/common.sh@124 -- # return 0 00:09:38.984 15:08:08 -- nvmf/common.sh@477 -- # '[' -n 64381 ']' 00:09:38.984 15:08:08 -- nvmf/common.sh@478 -- # killprocess 64381 00:09:38.984 15:08:08 -- common/autotest_common.sh@936 -- # '[' -z 64381 ']' 00:09:38.984 15:08:08 -- common/autotest_common.sh@940 -- # kill -0 64381 00:09:38.984 15:08:08 -- common/autotest_common.sh@941 -- # uname 00:09:38.984 15:08:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:38.984 15:08:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64381 00:09:38.984 killing process with pid 64381 00:09:38.984 15:08:08 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:09:38.984 15:08:08 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:09:38.984 15:08:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64381' 00:09:38.984 15:08:08 -- common/autotest_common.sh@955 -- # kill 64381 00:09:38.984 15:08:08 -- common/autotest_common.sh@960 -- # wait 64381 00:09:39.242 15:08:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:39.242 15:08:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:39.242 15:08:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:39.242 15:08:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:39.242 15:08:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:39.242 15:08:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.242 15:08:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:39.243 15:08:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.243 15:08:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:39.243 00:09:39.243 real 0m3.180s 00:09:39.243 user 0m10.218s 00:09:39.243 sys 0m1.154s 00:09:39.243 15:08:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:39.243 15:08:08 -- common/autotest_common.sh@10 -- # set +x 00:09:39.243 ************************************ 00:09:39.243 END TEST nvmf_bdevio_no_huge 00:09:39.243 ************************************ 00:09:39.502 15:08:08 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:09:39.502 15:08:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:39.502 15:08:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:39.502 15:08:08 -- common/autotest_common.sh@10 -- # set +x 00:09:39.502 ************************************ 00:09:39.502 START TEST nvmf_tls 00:09:39.502 ************************************ 00:09:39.502 15:08:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:09:39.502 * Looking for test storage... 
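The nvmftestfini/nvmfcleanup teardown traced above reduces to roughly the following; the netns removal line is an assumption about what the untraced _remove_spdk_ns helper does, the rest mirrors the trace.

sync
modprobe -v -r nvme-tcp          # drops nvme_tcp, nvme_fabrics and nvme_keyring, as the rmmod lines show
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"              # pid 64381 in this run
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null    # assumption: the net effect of _remove_spdk_ns
ip -4 addr flush nvmf_init_if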
00:09:39.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:39.502 15:08:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:39.502 15:08:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:39.502 15:08:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:39.502 15:08:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:39.502 15:08:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:39.502 15:08:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:39.502 15:08:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:39.502 15:08:08 -- scripts/common.sh@335 -- # IFS=.-: 00:09:39.502 15:08:08 -- scripts/common.sh@335 -- # read -ra ver1 00:09:39.502 15:08:08 -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.502 15:08:08 -- scripts/common.sh@336 -- # read -ra ver2 00:09:39.502 15:08:08 -- scripts/common.sh@337 -- # local 'op=<' 00:09:39.502 15:08:08 -- scripts/common.sh@339 -- # ver1_l=2 00:09:39.502 15:08:08 -- scripts/common.sh@340 -- # ver2_l=1 00:09:39.502 15:08:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:39.502 15:08:08 -- scripts/common.sh@343 -- # case "$op" in 00:09:39.502 15:08:08 -- scripts/common.sh@344 -- # : 1 00:09:39.502 15:08:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:39.502 15:08:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:39.502 15:08:08 -- scripts/common.sh@364 -- # decimal 1 00:09:39.502 15:08:08 -- scripts/common.sh@352 -- # local d=1 00:09:39.502 15:08:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.502 15:08:08 -- scripts/common.sh@354 -- # echo 1 00:09:39.502 15:08:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:39.502 15:08:08 -- scripts/common.sh@365 -- # decimal 2 00:09:39.502 15:08:08 -- scripts/common.sh@352 -- # local d=2 00:09:39.502 15:08:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.502 15:08:08 -- scripts/common.sh@354 -- # echo 2 00:09:39.502 15:08:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:39.502 15:08:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:39.502 15:08:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:39.502 15:08:08 -- scripts/common.sh@367 -- # return 0 00:09:39.502 15:08:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.502 15:08:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:39.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.502 --rc genhtml_branch_coverage=1 00:09:39.502 --rc genhtml_function_coverage=1 00:09:39.502 --rc genhtml_legend=1 00:09:39.502 --rc geninfo_all_blocks=1 00:09:39.502 --rc geninfo_unexecuted_blocks=1 00:09:39.502 00:09:39.502 ' 00:09:39.502 15:08:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:39.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.502 --rc genhtml_branch_coverage=1 00:09:39.502 --rc genhtml_function_coverage=1 00:09:39.502 --rc genhtml_legend=1 00:09:39.502 --rc geninfo_all_blocks=1 00:09:39.502 --rc geninfo_unexecuted_blocks=1 00:09:39.502 00:09:39.502 ' 00:09:39.502 15:08:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:39.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.502 --rc genhtml_branch_coverage=1 00:09:39.502 --rc genhtml_function_coverage=1 00:09:39.502 --rc genhtml_legend=1 00:09:39.502 --rc geninfo_all_blocks=1 00:09:39.502 --rc geninfo_unexecuted_blocks=1 00:09:39.502 00:09:39.502 ' 00:09:39.502 
15:08:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:39.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.502 --rc genhtml_branch_coverage=1 00:09:39.502 --rc genhtml_function_coverage=1 00:09:39.502 --rc genhtml_legend=1 00:09:39.502 --rc geninfo_all_blocks=1 00:09:39.502 --rc geninfo_unexecuted_blocks=1 00:09:39.502 00:09:39.502 ' 00:09:39.502 15:08:08 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:39.502 15:08:08 -- nvmf/common.sh@7 -- # uname -s 00:09:39.502 15:08:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.502 15:08:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.502 15:08:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.502 15:08:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.502 15:08:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.502 15:08:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.502 15:08:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.502 15:08:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.502 15:08:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.502 15:08:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.502 15:08:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:09:39.502 15:08:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:09:39.502 15:08:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.502 15:08:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.502 15:08:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:39.502 15:08:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:39.502 15:08:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.502 15:08:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.502 15:08:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.502 15:08:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.502 15:08:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.502 15:08:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.502 15:08:08 -- paths/export.sh@5 -- # export PATH 00:09:39.502 15:08:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.502 15:08:08 -- nvmf/common.sh@46 -- # : 0 00:09:39.502 15:08:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:39.502 15:08:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:39.502 15:08:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:39.502 15:08:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.502 15:08:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.502 15:08:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:39.502 15:08:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:39.502 15:08:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:39.502 15:08:08 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:39.502 15:08:08 -- target/tls.sh@71 -- # nvmftestinit 00:09:39.502 15:08:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:39.502 15:08:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.502 15:08:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:39.502 15:08:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:39.502 15:08:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:39.502 15:08:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.502 15:08:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:39.502 15:08:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.502 15:08:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:39.502 15:08:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:39.502 15:08:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:39.502 15:08:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:39.502 15:08:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:39.502 15:08:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:39.502 15:08:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.502 15:08:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:39.502 15:08:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:39.502 15:08:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:39.502 15:08:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:39.502 15:08:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:39.502 15:08:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:39.502 
15:08:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.502 15:08:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:39.502 15:08:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:39.502 15:08:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:39.502 15:08:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:39.503 15:08:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:39.503 15:08:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:39.503 Cannot find device "nvmf_tgt_br" 00:09:39.503 15:08:08 -- nvmf/common.sh@154 -- # true 00:09:39.503 15:08:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:39.503 Cannot find device "nvmf_tgt_br2" 00:09:39.503 15:08:08 -- nvmf/common.sh@155 -- # true 00:09:39.503 15:08:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:39.503 15:08:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:39.503 Cannot find device "nvmf_tgt_br" 00:09:39.503 15:08:08 -- nvmf/common.sh@157 -- # true 00:09:39.503 15:08:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:39.761 Cannot find device "nvmf_tgt_br2" 00:09:39.761 15:08:08 -- nvmf/common.sh@158 -- # true 00:09:39.761 15:08:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:39.761 15:08:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:39.761 15:08:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:39.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:39.761 15:08:08 -- nvmf/common.sh@161 -- # true 00:09:39.761 15:08:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:39.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:39.761 15:08:08 -- nvmf/common.sh@162 -- # true 00:09:39.761 15:08:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:39.761 15:08:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:39.761 15:08:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:39.761 15:08:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:39.761 15:08:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:39.761 15:08:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:39.761 15:08:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:39.761 15:08:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:39.761 15:08:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:39.761 15:08:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:39.761 15:08:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:39.761 15:08:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:39.761 15:08:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:39.761 15:08:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:39.761 15:08:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:39.761 15:08:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:39.761 15:08:08 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:39.761 15:08:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:39.761 15:08:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:39.761 15:08:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:39.761 15:08:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:39.761 15:08:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:39.761 15:08:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:39.761 15:08:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:39.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:39.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:09:39.761 00:09:39.761 --- 10.0.0.2 ping statistics --- 00:09:39.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.761 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:39.761 15:08:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:39.761 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:39.761 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:09:39.761 00:09:39.761 --- 10.0.0.3 ping statistics --- 00:09:39.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.761 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:39.761 15:08:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:39.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:39.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:39.761 00:09:39.761 --- 10.0.0.1 ping statistics --- 00:09:39.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.761 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:39.761 15:08:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:39.761 15:08:09 -- nvmf/common.sh@421 -- # return 0 00:09:39.761 15:08:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:39.761 15:08:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:39.761 15:08:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:39.761 15:08:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:39.761 15:08:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:39.761 15:08:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:39.761 15:08:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:40.020 15:08:09 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:09:40.020 15:08:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:40.020 15:08:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:40.020 15:08:09 -- common/autotest_common.sh@10 -- # set +x 00:09:40.020 15:08:09 -- nvmf/common.sh@469 -- # nvmfpid=64600 00:09:40.020 15:08:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:09:40.020 15:08:09 -- nvmf/common.sh@470 -- # waitforlisten 64600 00:09:40.020 15:08:09 -- common/autotest_common.sh@829 -- # '[' -z 64600 ']' 00:09:40.020 15:08:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.020 15:08:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:40.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
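For the tls.sh run, nvmf_veth_init rebuilds the same veth/bridge topology that was just torn down; consolidated from the commands in the trace above (interface names and addresses exactly as traced), a sketch:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP traffic on the host side
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow hairpin forwarding across the bridge
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # host -> namespace reachability
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # namespace -> host reachability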
00:09:40.020 15:08:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.020 15:08:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:40.020 15:08:09 -- common/autotest_common.sh@10 -- # set +x 00:09:40.020 [2024-11-06 15:08:09.105548] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:40.020 [2024-11-06 15:08:09.105687] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.020 [2024-11-06 15:08:09.248908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.280 [2024-11-06 15:08:09.316035] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:40.280 [2024-11-06 15:08:09.316226] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.280 [2024-11-06 15:08:09.316241] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.280 [2024-11-06 15:08:09.316252] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.280 [2024-11-06 15:08:09.316288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.847 15:08:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:40.847 15:08:10 -- common/autotest_common.sh@862 -- # return 0 00:09:40.847 15:08:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:40.847 15:08:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:40.847 15:08:10 -- common/autotest_common.sh@10 -- # set +x 00:09:40.847 15:08:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.847 15:08:10 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:09:40.847 15:08:10 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:09:41.106 true 00:09:41.106 15:08:10 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:09:41.106 15:08:10 -- target/tls.sh@82 -- # jq -r .tls_version 00:09:41.364 15:08:10 -- target/tls.sh@82 -- # version=0 00:09:41.364 15:08:10 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:09:41.364 15:08:10 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:09:41.622 15:08:10 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:09:41.622 15:08:10 -- target/tls.sh@90 -- # jq -r .tls_version 00:09:41.880 15:08:11 -- target/tls.sh@90 -- # version=13 00:09:41.880 15:08:11 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:09:41.880 15:08:11 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:09:42.447 15:08:11 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:09:42.447 15:08:11 -- target/tls.sh@98 -- # jq -r .tls_version 00:09:42.447 15:08:11 -- target/tls.sh@98 -- # version=7 00:09:42.447 15:08:11 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:09:42.447 15:08:11 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:09:42.447 15:08:11 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:09:42.705 15:08:11 -- 
target/tls.sh@105 -- # ktls=false 00:09:42.706 15:08:11 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:09:42.706 15:08:11 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:09:42.964 15:08:12 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:09:42.964 15:08:12 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:09:43.223 15:08:12 -- target/tls.sh@113 -- # ktls=true 00:09:43.223 15:08:12 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:09:43.223 15:08:12 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:09:43.482 15:08:12 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:09:43.482 15:08:12 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:09:43.741 15:08:12 -- target/tls.sh@121 -- # ktls=false 00:09:43.741 15:08:12 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:09:43.741 15:08:12 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:09:43.741 15:08:12 -- target/tls.sh@49 -- # local key hash crc 00:09:43.741 15:08:12 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:09:43.741 15:08:12 -- target/tls.sh@51 -- # hash=01 00:09:43.741 15:08:12 -- target/tls.sh@52 -- # tail -c8 00:09:43.741 15:08:12 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:09:43.741 15:08:12 -- target/tls.sh@52 -- # head -c 4 00:09:43.741 15:08:12 -- target/tls.sh@52 -- # gzip -1 -c 00:09:43.741 15:08:12 -- target/tls.sh@52 -- # crc='p$H�' 00:09:43.741 15:08:12 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:09:43.741 15:08:12 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:09:43.741 15:08:12 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:09:43.741 15:08:12 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:09:43.741 15:08:12 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:09:43.741 15:08:12 -- target/tls.sh@49 -- # local key hash crc 00:09:43.741 15:08:12 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:09:43.741 15:08:12 -- target/tls.sh@51 -- # hash=01 00:09:43.741 15:08:12 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:09:43.741 15:08:12 -- target/tls.sh@52 -- # gzip -1 -c 00:09:43.741 15:08:12 -- target/tls.sh@52 -- # tail -c8 00:09:43.741 15:08:12 -- target/tls.sh@52 -- # head -c 4 00:09:43.741 15:08:12 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:09:43.741 15:08:12 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:09:43.742 15:08:12 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:09:43.742 15:08:12 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:09:43.742 15:08:12 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:09:43.742 15:08:12 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:43.742 15:08:12 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:09:43.742 15:08:12 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:09:43.742 15:08:12 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 
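format_interchange_psk, as traced above, derives the TLS PSK interchange string by appending the 4 CRC-32 bytes taken from the gzip -1 trailer to the hex key and base64-encoding the result. A condensed sketch of the same steps (key1.txt stands for the key_path used by the test):

key=00112233445566778899aabbccddeeff
crc() { printf '%s' "$1" | gzip -1 -c | tail -c8 | head -c4; }   # gzip trailer = CRC-32 (LE) then size; keep the CRC-32
b64=$( { printf '%s' "$key"; crc "$key"; } | base64 )
printf 'NVMeTLSkey-1:01:%s:' "$b64" > key1.txt   # -> NVMeTLSkey-1:01:MDAx...JEiQ:, the value echoed above
chmod 0600 key1.txt                              # the trace applies the same mode to key1.txt and key2.txt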
00:09:43.742 15:08:12 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:43.742 15:08:12 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:09:43.742 15:08:12 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:09:44.000 15:08:13 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:09:44.568 15:08:13 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:44.569 15:08:13 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:44.569 15:08:13 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:09:44.569 [2024-11-06 15:08:13.782333] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.569 15:08:13 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:09:44.827 15:08:14 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:09:45.086 [2024-11-06 15:08:14.250432] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:09:45.086 [2024-11-06 15:08:14.250736] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:45.086 15:08:14 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:09:45.345 malloc0 00:09:45.345 15:08:14 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:45.604 15:08:14 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:45.862 15:08:15 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:58.068 Initializing NVMe Controllers 00:09:58.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:58.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:58.068 Initialization complete. Launching workers. 
00:09:58.068 ======================================================== 00:09:58.068 Latency(us) 00:09:58.068 Device Information : IOPS MiB/s Average min max 00:09:58.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10593.24 41.38 6042.68 1588.10 8467.24 00:09:58.068 ======================================================== 00:09:58.068 Total : 10593.24 41.38 6042.68 1588.10 8467.24 00:09:58.068 00:09:58.068 15:08:25 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:58.068 15:08:25 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:09:58.068 15:08:25 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:09:58.068 15:08:25 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:09:58.068 15:08:25 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:09:58.068 15:08:25 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:58.068 15:08:25 -- target/tls.sh@28 -- # bdevperf_pid=64848 00:09:58.068 15:08:25 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:58.068 15:08:25 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:09:58.068 15:08:25 -- target/tls.sh@31 -- # waitforlisten 64848 /var/tmp/bdevperf.sock 00:09:58.068 15:08:25 -- common/autotest_common.sh@829 -- # '[' -z 64848 ']' 00:09:58.068 15:08:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:58.068 15:08:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:58.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:58.068 15:08:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:58.068 15:08:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:58.068 15:08:25 -- common/autotest_common.sh@10 -- # set +x 00:09:58.068 [2024-11-06 15:08:25.247765] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:58.068 [2024-11-06 15:08:25.247883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64848 ] 00:09:58.068 [2024-11-06 15:08:25.387970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.068 [2024-11-06 15:08:25.456419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.068 15:08:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:58.068 15:08:26 -- common/autotest_common.sh@862 -- # return 0 00:09:58.068 15:08:26 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:58.068 [2024-11-06 15:08:26.432010] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:09:58.068 TLSTESTn1 00:09:58.068 15:08:26 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:09:58.068 Running I/O for 10 seconds... 
00:10:08.044 00:10:08.044 Latency(us) 00:10:08.044 [2024-11-06T15:08:37.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.044 [2024-11-06T15:08:37.319Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:10:08.044 Verification LBA range: start 0x0 length 0x2000 00:10:08.044 TLSTESTn1 : 10.01 6081.65 23.76 0.00 0.00 21011.22 4557.73 23712.12 00:10:08.044 [2024-11-06T15:08:37.319Z] =================================================================================================================== 00:10:08.044 [2024-11-06T15:08:37.319Z] Total : 6081.65 23.76 0.00 0.00 21011.22 4557.73 23712.12 00:10:08.044 0 00:10:08.044 15:08:36 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:08.044 15:08:36 -- target/tls.sh@45 -- # killprocess 64848 00:10:08.044 15:08:36 -- common/autotest_common.sh@936 -- # '[' -z 64848 ']' 00:10:08.044 15:08:36 -- common/autotest_common.sh@940 -- # kill -0 64848 00:10:08.044 15:08:36 -- common/autotest_common.sh@941 -- # uname 00:10:08.044 15:08:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:08.044 15:08:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64848 00:10:08.044 killing process with pid 64848 00:10:08.044 Received shutdown signal, test time was about 10.000000 seconds 00:10:08.044 00:10:08.044 Latency(us) 00:10:08.044 [2024-11-06T15:08:37.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.044 [2024-11-06T15:08:37.319Z] =================================================================================================================== 00:10:08.044 [2024-11-06T15:08:37.319Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:08.044 15:08:36 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:08.044 15:08:36 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:08.044 15:08:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64848' 00:10:08.044 15:08:36 -- common/autotest_common.sh@955 -- # kill 64848 00:10:08.044 15:08:36 -- common/autotest_common.sh@960 -- # wait 64848 00:10:08.044 15:08:36 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:08.044 15:08:36 -- common/autotest_common.sh@650 -- # local es=0 00:10:08.044 15:08:36 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:08.044 15:08:36 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:10:08.044 15:08:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:08.044 15:08:36 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:10:08.044 15:08:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:08.044 15:08:36 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:08.044 15:08:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:08.044 15:08:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:08.044 15:08:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:08.044 15:08:36 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:10:08.044 15:08:36 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:08.044 
15:08:36 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:08.044 15:08:36 -- target/tls.sh@28 -- # bdevperf_pid=64981 00:10:08.044 15:08:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:08.044 15:08:36 -- target/tls.sh@31 -- # waitforlisten 64981 /var/tmp/bdevperf.sock 00:10:08.044 15:08:36 -- common/autotest_common.sh@829 -- # '[' -z 64981 ']' 00:10:08.044 15:08:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:08.044 15:08:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:08.044 15:08:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:08.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:08.044 15:08:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:08.044 15:08:36 -- common/autotest_common.sh@10 -- # set +x 00:10:08.045 [2024-11-06 15:08:36.915768] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:08.045 [2024-11-06 15:08:36.916022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64981 ] 00:10:08.045 [2024-11-06 15:08:37.048209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.045 [2024-11-06 15:08:37.103869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.981 15:08:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:08.981 15:08:37 -- common/autotest_common.sh@862 -- # return 0 00:10:08.981 15:08:37 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:08.981 [2024-11-06 15:08:38.150555] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:08.981 [2024-11-06 15:08:38.159899] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:08.981 [2024-11-06 15:08:38.159987] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c0650 (107): Transport endpoint is not connected 00:10:08.981 [2024-11-06 15:08:38.160962] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c0650 (9): Bad file descriptor 00:10:08.981 [2024-11-06 15:08:38.161962] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:10:08.981 [2024-11-06 15:08:38.162005] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:08.981 [2024-11-06 15:08:38.162016] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:10:08.981 request: 00:10:08.981 { 00:10:08.981 "name": "TLSTEST", 00:10:08.981 "trtype": "tcp", 00:10:08.981 "traddr": "10.0.0.2", 00:10:08.981 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:08.981 "adrfam": "ipv4", 00:10:08.981 "trsvcid": "4420", 00:10:08.981 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:08.981 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:10:08.981 "method": "bdev_nvme_attach_controller", 00:10:08.981 "req_id": 1 00:10:08.981 } 00:10:08.981 Got JSON-RPC error response 00:10:08.981 response: 00:10:08.981 { 00:10:08.981 "code": -32602, 00:10:08.981 "message": "Invalid parameters" 00:10:08.981 } 00:10:08.981 15:08:38 -- target/tls.sh@36 -- # killprocess 64981 00:10:08.981 15:08:38 -- common/autotest_common.sh@936 -- # '[' -z 64981 ']' 00:10:08.981 15:08:38 -- common/autotest_common.sh@940 -- # kill -0 64981 00:10:08.981 15:08:38 -- common/autotest_common.sh@941 -- # uname 00:10:08.981 15:08:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:08.981 15:08:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64981 00:10:08.981 killing process with pid 64981 00:10:08.981 Received shutdown signal, test time was about 10.000000 seconds 00:10:08.981 00:10:08.981 Latency(us) 00:10:08.981 [2024-11-06T15:08:38.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.981 [2024-11-06T15:08:38.256Z] =================================================================================================================== 00:10:08.981 [2024-11-06T15:08:38.256Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:08.981 15:08:38 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:08.981 15:08:38 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:08.981 15:08:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64981' 00:10:08.981 15:08:38 -- common/autotest_common.sh@955 -- # kill 64981 00:10:08.981 15:08:38 -- common/autotest_common.sh@960 -- # wait 64981 00:10:09.241 15:08:38 -- target/tls.sh@37 -- # return 1 00:10:09.241 15:08:38 -- common/autotest_common.sh@653 -- # es=1 00:10:09.241 15:08:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:09.241 15:08:38 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:09.241 15:08:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:09.241 15:08:38 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:09.241 15:08:38 -- common/autotest_common.sh@650 -- # local es=0 00:10:09.241 15:08:38 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:09.241 15:08:38 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:10:09.241 15:08:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:09.241 15:08:38 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:10:09.241 15:08:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:09.241 15:08:38 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:09.241 15:08:38 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:09.241 15:08:38 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:09.241 15:08:38 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:10:09.241 15:08:38 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:10:09.241 15:08:38 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:09.241 15:08:38 -- target/tls.sh@28 -- # bdevperf_pid=65009 00:10:09.241 15:08:38 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:09.241 15:08:38 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:09.241 15:08:38 -- target/tls.sh@31 -- # waitforlisten 65009 /var/tmp/bdevperf.sock 00:10:09.241 15:08:38 -- common/autotest_common.sh@829 -- # '[' -z 65009 ']' 00:10:09.241 15:08:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:09.241 15:08:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:09.241 15:08:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:09.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:09.241 15:08:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:09.241 15:08:38 -- common/autotest_common.sh@10 -- # set +x 00:10:09.241 [2024-11-06 15:08:38.461921] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:09.241 [2024-11-06 15:08:38.462007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65009 ] 00:10:09.500 [2024-11-06 15:08:38.592228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.500 [2024-11-06 15:08:38.644113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.437 15:08:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:10.437 15:08:39 -- common/autotest_common.sh@862 -- # return 0 00:10:10.437 15:08:39 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:10.437 [2024-11-06 15:08:39.666962] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:10.437 [2024-11-06 15:08:39.671672] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:10:10.437 [2024-11-06 15:08:39.671733] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:10:10.437 [2024-11-06 15:08:39.671781] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:10.437 [2024-11-06 15:08:39.672384] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5fd650 (107): Transport endpoint is not connected 00:10:10.437 [2024-11-06 15:08:39.673369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5fd650 (9): Bad file descriptor 00:10:10.437 [2024-11-06 15:08:39.674365] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:10:10.437 [2024-11-06 15:08:39.674421] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:10.437 [2024-11-06 15:08:39.674431] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:10:10.437 request: 00:10:10.437 { 00:10:10.437 "name": "TLSTEST", 00:10:10.437 "trtype": "tcp", 00:10:10.437 "traddr": "10.0.0.2", 00:10:10.437 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:10:10.437 "adrfam": "ipv4", 00:10:10.437 "trsvcid": "4420", 00:10:10.437 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:10.437 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:10:10.437 "method": "bdev_nvme_attach_controller", 00:10:10.437 "req_id": 1 00:10:10.437 } 00:10:10.437 Got JSON-RPC error response 00:10:10.437 response: 00:10:10.437 { 00:10:10.437 "code": -32602, 00:10:10.437 "message": "Invalid parameters" 00:10:10.437 } 00:10:10.437 15:08:39 -- target/tls.sh@36 -- # killprocess 65009 00:10:10.437 15:08:39 -- common/autotest_common.sh@936 -- # '[' -z 65009 ']' 00:10:10.437 15:08:39 -- common/autotest_common.sh@940 -- # kill -0 65009 00:10:10.437 15:08:39 -- common/autotest_common.sh@941 -- # uname 00:10:10.437 15:08:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:10.437 15:08:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65009 00:10:10.697 15:08:39 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:10.697 15:08:39 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:10.697 killing process with pid 65009 00:10:10.697 15:08:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65009' 00:10:10.697 Received shutdown signal, test time was about 10.000000 seconds 00:10:10.697 00:10:10.697 Latency(us) 00:10:10.697 [2024-11-06T15:08:39.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.697 [2024-11-06T15:08:39.972Z] =================================================================================================================== 00:10:10.697 [2024-11-06T15:08:39.972Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:10.697 15:08:39 -- common/autotest_common.sh@955 -- # kill 65009 00:10:10.697 15:08:39 -- common/autotest_common.sh@960 -- # wait 65009 00:10:10.697 15:08:39 -- target/tls.sh@37 -- # return 1 00:10:10.697 15:08:39 -- common/autotest_common.sh@653 -- # es=1 00:10:10.697 15:08:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:10.697 15:08:39 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:10.697 15:08:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:10.697 15:08:39 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:10.697 15:08:39 -- common/autotest_common.sh@650 -- # local es=0 00:10:10.697 15:08:39 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:10.697 15:08:39 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:10:10.697 15:08:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:10.697 15:08:39 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:10:10.697 15:08:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:10.697 15:08:39 -- common/autotest_common.sh@653 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:10.697 15:08:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:10.697 15:08:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:10:10.697 15:08:39 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:10.697 15:08:39 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:10:10.697 15:08:39 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:10.697 15:08:39 -- target/tls.sh@28 -- # bdevperf_pid=65031 00:10:10.697 15:08:39 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:10.697 15:08:39 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:10.697 15:08:39 -- target/tls.sh@31 -- # waitforlisten 65031 /var/tmp/bdevperf.sock 00:10:10.697 15:08:39 -- common/autotest_common.sh@829 -- # '[' -z 65031 ']' 00:10:10.697 15:08:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:10.697 15:08:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:10.697 15:08:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:10.697 15:08:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.697 15:08:39 -- common/autotest_common.sh@10 -- # set +x 00:10:10.697 [2024-11-06 15:08:39.944029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:10.697 [2024-11-06 15:08:39.944133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65031 ] 00:10:10.956 [2024-11-06 15:08:40.084008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.956 [2024-11-06 15:08:40.137529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.894 15:08:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:11.894 15:08:40 -- common/autotest_common.sh@862 -- # return 0 00:10:11.894 15:08:40 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:11.894 [2024-11-06 15:08:41.135839] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:11.894 [2024-11-06 15:08:41.140705] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:10:11.894 [2024-11-06 15:08:41.140750] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:10:11.894 [2024-11-06 15:08:41.140798] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:11.894 [2024-11-06 15:08:41.141424] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8d650 
(107): Transport endpoint is not connected 00:10:11.894 [2024-11-06 15:08:41.142412] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8d650 (9): Bad file descriptor 00:10:11.894 [2024-11-06 15:08:41.143409] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:10:11.894 [2024-11-06 15:08:41.143437] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:11.894 [2024-11-06 15:08:41.143447] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:10:11.894 request: 00:10:11.894 { 00:10:11.894 "name": "TLSTEST", 00:10:11.894 "trtype": "tcp", 00:10:11.894 "traddr": "10.0.0.2", 00:10:11.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:11.894 "adrfam": "ipv4", 00:10:11.894 "trsvcid": "4420", 00:10:11.894 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:10:11.894 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:10:11.894 "method": "bdev_nvme_attach_controller", 00:10:11.894 "req_id": 1 00:10:11.894 } 00:10:11.894 Got JSON-RPC error response 00:10:11.894 response: 00:10:11.894 { 00:10:11.894 "code": -32602, 00:10:11.894 "message": "Invalid parameters" 00:10:11.894 } 00:10:11.894 15:08:41 -- target/tls.sh@36 -- # killprocess 65031 00:10:11.894 15:08:41 -- common/autotest_common.sh@936 -- # '[' -z 65031 ']' 00:10:11.894 15:08:41 -- common/autotest_common.sh@940 -- # kill -0 65031 00:10:11.894 15:08:41 -- common/autotest_common.sh@941 -- # uname 00:10:11.894 15:08:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:11.894 15:08:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65031 00:10:12.153 killing process with pid 65031 00:10:12.153 Received shutdown signal, test time was about 10.000000 seconds 00:10:12.153 00:10:12.153 Latency(us) 00:10:12.153 [2024-11-06T15:08:41.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:12.153 [2024-11-06T15:08:41.428Z] =================================================================================================================== 00:10:12.153 [2024-11-06T15:08:41.428Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:12.153 15:08:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:12.153 15:08:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:12.153 15:08:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65031' 00:10:12.153 15:08:41 -- common/autotest_common.sh@955 -- # kill 65031 00:10:12.153 15:08:41 -- common/autotest_common.sh@960 -- # wait 65031 00:10:12.153 15:08:41 -- target/tls.sh@37 -- # return 1 00:10:12.153 15:08:41 -- common/autotest_common.sh@653 -- # es=1 00:10:12.153 15:08:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:12.153 15:08:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:12.153 15:08:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:12.153 15:08:41 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:12.153 15:08:41 -- common/autotest_common.sh@650 -- # local es=0 00:10:12.153 15:08:41 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:12.153 15:08:41 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:10:12.154 15:08:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:12.154 15:08:41 -- common/autotest_common.sh@642 -- # type 
-t run_bdevperf 00:10:12.154 15:08:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:12.154 15:08:41 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:12.154 15:08:41 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:12.154 15:08:41 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:12.154 15:08:41 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:12.154 15:08:41 -- target/tls.sh@23 -- # psk= 00:10:12.154 15:08:41 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:12.154 15:08:41 -- target/tls.sh@28 -- # bdevperf_pid=65064 00:10:12.154 15:08:41 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:12.154 15:08:41 -- target/tls.sh@31 -- # waitforlisten 65064 /var/tmp/bdevperf.sock 00:10:12.154 15:08:41 -- common/autotest_common.sh@829 -- # '[' -z 65064 ']' 00:10:12.154 15:08:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:12.154 15:08:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:12.154 15:08:41 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:12.154 15:08:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:12.154 15:08:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.154 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:10:12.154 [2024-11-06 15:08:41.418671] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:12.154 [2024-11-06 15:08:41.419272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65064 ] 00:10:12.412 [2024-11-06 15:08:41.553453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.412 [2024-11-06 15:08:41.606421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.349 15:08:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.349 15:08:42 -- common/autotest_common.sh@862 -- # return 0 00:10:13.349 15:08:42 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:10:13.608 [2024-11-06 15:08:42.631319] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:13.608 [2024-11-06 15:08:42.633239] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b96010 (9): Bad file descriptor 00:10:13.608 [2024-11-06 15:08:42.634235] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:10:13.608 [2024-11-06 15:08:42.634278] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:13.608 [2024-11-06 15:08:42.634290] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
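The three failed attach attempts above (wrong key for the host, wrong key for the subsystem, and no key at all) all go through the same initiator-side RPC; only the --psk argument changes between them. A minimal sketch of that call, reusing the paths, addresses and NQNs from this run and taking the host2/cnode1 mismatch as the example:

    # Start bdevperf in wait mode (-z) on a dedicated RPC socket, as the test harness does.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # Attach a TLS-enabled controller with a PSK that is not registered for this
    # host/subsystem pair. The target cannot find a key for the TLS identity
    # "NVMe0R01 <hostnqn> <subnqn>", the connection is torn down, and the RPC
    # returns -32602 "Invalid parameters", which the NOT wrapper counts as a pass.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt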
00:10:13.608 request: 00:10:13.608 { 00:10:13.608 "name": "TLSTEST", 00:10:13.608 "trtype": "tcp", 00:10:13.608 "traddr": "10.0.0.2", 00:10:13.608 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:13.608 "adrfam": "ipv4", 00:10:13.608 "trsvcid": "4420", 00:10:13.608 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:13.608 "method": "bdev_nvme_attach_controller", 00:10:13.608 "req_id": 1 00:10:13.608 } 00:10:13.608 Got JSON-RPC error response 00:10:13.608 response: 00:10:13.608 { 00:10:13.608 "code": -32602, 00:10:13.608 "message": "Invalid parameters" 00:10:13.608 } 00:10:13.608 15:08:42 -- target/tls.sh@36 -- # killprocess 65064 00:10:13.608 15:08:42 -- common/autotest_common.sh@936 -- # '[' -z 65064 ']' 00:10:13.608 15:08:42 -- common/autotest_common.sh@940 -- # kill -0 65064 00:10:13.608 15:08:42 -- common/autotest_common.sh@941 -- # uname 00:10:13.608 15:08:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:13.608 15:08:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65064 00:10:13.608 15:08:42 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:13.608 15:08:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:13.608 killing process with pid 65064 00:10:13.608 15:08:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65064' 00:10:13.608 15:08:42 -- common/autotest_common.sh@955 -- # kill 65064 00:10:13.608 Received shutdown signal, test time was about 10.000000 seconds 00:10:13.608 00:10:13.608 Latency(us) 00:10:13.608 [2024-11-06T15:08:42.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.608 [2024-11-06T15:08:42.883Z] =================================================================================================================== 00:10:13.608 [2024-11-06T15:08:42.883Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:13.608 15:08:42 -- common/autotest_common.sh@960 -- # wait 65064 00:10:13.608 15:08:42 -- target/tls.sh@37 -- # return 1 00:10:13.608 15:08:42 -- common/autotest_common.sh@653 -- # es=1 00:10:13.608 15:08:42 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:13.608 15:08:42 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:13.608 15:08:42 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:13.608 15:08:42 -- target/tls.sh@167 -- # killprocess 64600 00:10:13.608 15:08:42 -- common/autotest_common.sh@936 -- # '[' -z 64600 ']' 00:10:13.608 15:08:42 -- common/autotest_common.sh@940 -- # kill -0 64600 00:10:13.608 15:08:42 -- common/autotest_common.sh@941 -- # uname 00:10:13.608 15:08:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:13.608 15:08:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64600 00:10:13.867 15:08:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:13.867 15:08:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:13.867 killing process with pid 64600 00:10:13.867 15:08:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64600' 00:10:13.867 15:08:42 -- common/autotest_common.sh@955 -- # kill 64600 00:10:13.867 15:08:42 -- common/autotest_common.sh@960 -- # wait 64600 00:10:13.867 15:08:43 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:10:13.867 15:08:43 -- target/tls.sh@49 -- # local key hash crc 00:10:13.867 15:08:43 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:10:13.867 15:08:43 -- target/tls.sh@51 -- # hash=02 
00:10:13.867 15:08:43 -- target/tls.sh@52 -- # gzip -1 -c 00:10:13.867 15:08:43 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:10:13.867 15:08:43 -- target/tls.sh@52 -- # tail -c8 00:10:13.867 15:08:43 -- target/tls.sh@52 -- # head -c 4 00:10:13.867 15:08:43 -- target/tls.sh@52 -- # crc='�e�'\''' 00:10:13.867 15:08:43 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:10:13.867 15:08:43 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:10:13.867 15:08:43 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:10:13.867 15:08:43 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:10:13.867 15:08:43 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:13.867 15:08:43 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:10:13.867 15:08:43 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:13.867 15:08:43 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:10:13.867 15:08:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:13.867 15:08:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:13.867 15:08:43 -- common/autotest_common.sh@10 -- # set +x 00:10:13.867 15:08:43 -- nvmf/common.sh@469 -- # nvmfpid=65101 00:10:13.867 15:08:43 -- nvmf/common.sh@470 -- # waitforlisten 65101 00:10:13.867 15:08:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:13.867 15:08:43 -- common/autotest_common.sh@829 -- # '[' -z 65101 ']' 00:10:13.867 15:08:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.867 15:08:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:13.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.867 15:08:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.868 15:08:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:13.868 15:08:43 -- common/autotest_common.sh@10 -- # set +x 00:10:14.126 [2024-11-06 15:08:43.151039] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:14.126 [2024-11-06 15:08:43.151161] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.126 [2024-11-06 15:08:43.282033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.126 [2024-11-06 15:08:43.338158] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:14.126 [2024-11-06 15:08:43.338496] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.126 [2024-11-06 15:08:43.338610] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.126 [2024-11-06 15:08:43.338741] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
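The key_long.txt material used for the rest of the run is built by the format_interchange_psk helper traced just above: the configured key string gets a CRC32 appended and the result is base64-encoded under an NVMeTLSkey-1 label. gzip -1 is used only because the first four bytes of its eight-byte trailer are the CRC32 of the input. A condensed sketch of the same steps, with the key and hash indicator (02, the SHA-384 flavour of the label) taken from the trace; capturing the CRC in a shell variable assumes it contains no NUL bytes, which happens to hold for this key:

    key=00112233445566778899aabbccddeeff0011223344556677
    hash=02

    # gzip trailer = CRC32 (4 bytes, little-endian) followed by the input length;
    # keep only the CRC32 bytes.
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)

    # Interchange form: "NVMeTLSkey-1:<hash>:" + base64(key bytes || CRC32) + ":"
    echo "NVMeTLSkey-1:$hash:$(base64 <(echo -n "$key$crc")):" \
        > /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt

For this key the result is the NVMeTLSkey-1:02:MDAx...wWXNJw==: string echoed above; nvmf_subsystem_add_host and bdev_nvme_attach_controller both consume it through the --psk file path.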
00:10:14.126 [2024-11-06 15:08:43.338843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.062 15:08:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:15.062 15:08:44 -- common/autotest_common.sh@862 -- # return 0 00:10:15.062 15:08:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:15.062 15:08:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:15.062 15:08:44 -- common/autotest_common.sh@10 -- # set +x 00:10:15.062 15:08:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.062 15:08:44 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:15.062 15:08:44 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:15.062 15:08:44 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:15.321 [2024-11-06 15:08:44.396086] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.321 15:08:44 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:15.581 15:08:44 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:10:15.839 [2024-11-06 15:08:44.956232] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:15.839 [2024-11-06 15:08:44.956481] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.839 15:08:44 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:10:16.098 malloc0 00:10:16.098 15:08:45 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:16.356 15:08:45 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:16.614 15:08:45 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:16.614 15:08:45 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:16.614 15:08:45 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:16.614 15:08:45 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:16.614 15:08:45 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:10:16.614 15:08:45 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:16.614 15:08:45 -- target/tls.sh@28 -- # bdevperf_pid=65161 00:10:16.614 15:08:45 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:16.614 15:08:45 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:16.614 15:08:45 -- target/tls.sh@31 -- # waitforlisten 65161 /var/tmp/bdevperf.sock 00:10:16.614 15:08:45 -- common/autotest_common.sh@829 -- # '[' -z 65161 ']' 00:10:16.614 15:08:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:16.614 15:08:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:16.615 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock... 00:10:16.615 15:08:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:16.615 15:08:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:16.615 15:08:45 -- common/autotest_common.sh@10 -- # set +x 00:10:16.615 [2024-11-06 15:08:45.745239] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:16.615 [2024-11-06 15:08:45.745347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65161 ] 00:10:16.615 [2024-11-06 15:08:45.877020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.873 [2024-11-06 15:08:45.929632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.440 15:08:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:17.440 15:08:46 -- common/autotest_common.sh@862 -- # return 0 00:10:17.440 15:08:46 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:17.699 [2024-11-06 15:08:46.931514] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:17.958 TLSTESTn1 00:10:17.958 15:08:47 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:10:17.958 Running I/O for 10 seconds... 00:10:27.935 00:10:27.935 Latency(us) 00:10:27.935 [2024-11-06T15:08:57.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.935 [2024-11-06T15:08:57.210Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:10:27.935 Verification LBA range: start 0x0 length 0x2000 00:10:27.935 TLSTESTn1 : 10.01 5864.87 22.91 0.00 0.00 21789.69 4557.73 19660.80 00:10:27.935 [2024-11-06T15:08:57.210Z] =================================================================================================================== 00:10:27.935 [2024-11-06T15:08:57.210Z] Total : 5864.87 22.91 0.00 0.00 21789.69 4557.73 19660.80 00:10:27.935 0 00:10:27.935 15:08:57 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:27.935 15:08:57 -- target/tls.sh@45 -- # killprocess 65161 00:10:27.935 15:08:57 -- common/autotest_common.sh@936 -- # '[' -z 65161 ']' 00:10:27.935 15:08:57 -- common/autotest_common.sh@940 -- # kill -0 65161 00:10:27.935 15:08:57 -- common/autotest_common.sh@941 -- # uname 00:10:27.935 15:08:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:27.935 15:08:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65161 00:10:27.935 15:08:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:27.935 15:08:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:27.935 killing process with pid 65161 00:10:27.935 15:08:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65161' 00:10:27.935 Received shutdown signal, test time was about 10.000000 seconds 00:10:27.935 00:10:27.935 Latency(us) 00:10:27.935 [2024-11-06T15:08:57.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.935 
[2024-11-06T15:08:57.210Z] =================================================================================================================== 00:10:27.935 [2024-11-06T15:08:57.210Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:27.935 15:08:57 -- common/autotest_common.sh@955 -- # kill 65161 00:10:27.935 15:08:57 -- common/autotest_common.sh@960 -- # wait 65161 00:10:28.194 15:08:57 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:28.194 15:08:57 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:28.194 15:08:57 -- common/autotest_common.sh@650 -- # local es=0 00:10:28.194 15:08:57 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:28.194 15:08:57 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:10:28.194 15:08:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:28.194 15:08:57 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:10:28.194 15:08:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:28.194 15:08:57 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:28.194 15:08:57 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:28.194 15:08:57 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:28.194 15:08:57 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:28.194 15:08:57 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:10:28.194 15:08:57 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:28.194 15:08:57 -- target/tls.sh@28 -- # bdevperf_pid=65290 00:10:28.194 15:08:57 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:28.194 15:08:57 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:28.194 15:08:57 -- target/tls.sh@31 -- # waitforlisten 65290 /var/tmp/bdevperf.sock 00:10:28.194 15:08:57 -- common/autotest_common.sh@829 -- # '[' -z 65290 ']' 00:10:28.194 15:08:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:28.194 15:08:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:28.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:28.194 15:08:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:28.194 15:08:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:28.194 15:08:57 -- common/autotest_common.sh@10 -- # set +x 00:10:28.194 [2024-11-06 15:08:57.434723] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:28.194 [2024-11-06 15:08:57.434825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65290 ] 00:10:28.453 [2024-11-06 15:08:57.571888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.453 [2024-11-06 15:08:57.630799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.388 15:08:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:29.388 15:08:58 -- common/autotest_common.sh@862 -- # return 0 00:10:29.388 15:08:58 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:29.647 [2024-11-06 15:08:58.688049] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:29.647 [2024-11-06 15:08:58.688127] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:10:29.647 request: 00:10:29.647 { 00:10:29.647 "name": "TLSTEST", 00:10:29.647 "trtype": "tcp", 00:10:29.647 "traddr": "10.0.0.2", 00:10:29.647 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:29.647 "adrfam": "ipv4", 00:10:29.647 "trsvcid": "4420", 00:10:29.647 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:29.647 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:10:29.647 "method": "bdev_nvme_attach_controller", 00:10:29.647 "req_id": 1 00:10:29.647 } 00:10:29.647 Got JSON-RPC error response 00:10:29.647 response: 00:10:29.647 { 00:10:29.647 "code": -22, 00:10:29.647 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:10:29.647 } 00:10:29.647 15:08:58 -- target/tls.sh@36 -- # killprocess 65290 00:10:29.647 15:08:58 -- common/autotest_common.sh@936 -- # '[' -z 65290 ']' 00:10:29.647 15:08:58 -- common/autotest_common.sh@940 -- # kill -0 65290 00:10:29.647 15:08:58 -- common/autotest_common.sh@941 -- # uname 00:10:29.647 15:08:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:29.647 15:08:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65290 00:10:29.647 15:08:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:29.647 15:08:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:29.647 killing process with pid 65290 00:10:29.647 15:08:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65290' 00:10:29.647 15:08:58 -- common/autotest_common.sh@955 -- # kill 65290 00:10:29.647 Received shutdown signal, test time was about 10.000000 seconds 00:10:29.647 00:10:29.647 Latency(us) 00:10:29.647 [2024-11-06T15:08:58.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:29.647 [2024-11-06T15:08:58.922Z] =================================================================================================================== 00:10:29.647 [2024-11-06T15:08:58.922Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:29.647 15:08:58 -- common/autotest_common.sh@960 -- # wait 65290 00:10:29.647 15:08:58 -- target/tls.sh@37 -- # return 1 00:10:29.647 15:08:58 -- common/autotest_common.sh@653 -- # es=1 00:10:29.647 15:08:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:29.647 15:08:58 -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:29.647 15:08:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:29.647 15:08:58 -- target/tls.sh@183 -- # killprocess 65101 00:10:29.647 15:08:58 -- common/autotest_common.sh@936 -- # '[' -z 65101 ']' 00:10:29.647 15:08:58 -- common/autotest_common.sh@940 -- # kill -0 65101 00:10:29.647 15:08:58 -- common/autotest_common.sh@941 -- # uname 00:10:29.647 15:08:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:29.647 15:08:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65101 00:10:29.906 15:08:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:29.906 15:08:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:29.906 killing process with pid 65101 00:10:29.906 15:08:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65101' 00:10:29.906 15:08:58 -- common/autotest_common.sh@955 -- # kill 65101 00:10:29.906 15:08:58 -- common/autotest_common.sh@960 -- # wait 65101 00:10:29.906 15:08:59 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:10:29.906 15:08:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:29.906 15:08:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:29.906 15:08:59 -- common/autotest_common.sh@10 -- # set +x 00:10:29.906 15:08:59 -- nvmf/common.sh@469 -- # nvmfpid=65328 00:10:29.906 15:08:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:29.906 15:08:59 -- nvmf/common.sh@470 -- # waitforlisten 65328 00:10:29.906 15:08:59 -- common/autotest_common.sh@829 -- # '[' -z 65328 ']' 00:10:29.906 15:08:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.906 15:08:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:29.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.906 15:08:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.906 15:08:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:29.906 15:08:59 -- common/autotest_common.sh@10 -- # set +x 00:10:30.165 [2024-11-06 15:08:59.198933] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:30.165 [2024-11-06 15:08:59.199034] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.165 [2024-11-06 15:08:59.336720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.165 [2024-11-06 15:08:59.391224] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:30.165 [2024-11-06 15:08:59.391391] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.165 [2024-11-06 15:08:59.391404] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.165 [2024-11-06 15:08:59.391413] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:30.165 [2024-11-06 15:08:59.391444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.101 15:09:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:31.101 15:09:00 -- common/autotest_common.sh@862 -- # return 0 00:10:31.101 15:09:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:31.101 15:09:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:31.101 15:09:00 -- common/autotest_common.sh@10 -- # set +x 00:10:31.101 15:09:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.101 15:09:00 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:31.101 15:09:00 -- common/autotest_common.sh@650 -- # local es=0 00:10:31.101 15:09:00 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:31.101 15:09:00 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:10:31.101 15:09:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:31.101 15:09:00 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:10:31.101 15:09:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:31.101 15:09:00 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:31.101 15:09:00 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:31.101 15:09:00 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:31.359 [2024-11-06 15:09:00.388283] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:31.360 15:09:00 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:31.360 15:09:00 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:10:31.618 [2024-11-06 15:09:00.880404] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:31.618 [2024-11-06 15:09:00.880642] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.876 15:09:00 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:10:32.150 malloc0 00:10:32.150 15:09:01 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:32.150 15:09:01 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:32.424 [2024-11-06 15:09:01.619476] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:10:32.424 [2024-11-06 15:09:01.619526] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:10:32.424 [2024-11-06 15:09:01.619547] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:10:32.424 request: 00:10:32.424 { 00:10:32.424 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:32.424 "host": "nqn.2016-06.io.spdk:host1", 00:10:32.424 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:10:32.424 "method": "nvmf_subsystem_add_host", 00:10:32.424 
"req_id": 1 00:10:32.424 } 00:10:32.424 Got JSON-RPC error response 00:10:32.424 response: 00:10:32.424 { 00:10:32.424 "code": -32603, 00:10:32.424 "message": "Internal error" 00:10:32.424 } 00:10:32.424 15:09:01 -- common/autotest_common.sh@653 -- # es=1 00:10:32.424 15:09:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:32.424 15:09:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:32.424 15:09:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:32.424 15:09:01 -- target/tls.sh@189 -- # killprocess 65328 00:10:32.424 15:09:01 -- common/autotest_common.sh@936 -- # '[' -z 65328 ']' 00:10:32.424 15:09:01 -- common/autotest_common.sh@940 -- # kill -0 65328 00:10:32.424 15:09:01 -- common/autotest_common.sh@941 -- # uname 00:10:32.424 15:09:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:32.424 15:09:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65328 00:10:32.424 killing process with pid 65328 00:10:32.424 15:09:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:32.424 15:09:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:32.424 15:09:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65328' 00:10:32.424 15:09:01 -- common/autotest_common.sh@955 -- # kill 65328 00:10:32.424 15:09:01 -- common/autotest_common.sh@960 -- # wait 65328 00:10:32.683 15:09:01 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:32.683 15:09:01 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:10:32.683 15:09:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:32.683 15:09:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:32.683 15:09:01 -- common/autotest_common.sh@10 -- # set +x 00:10:32.683 15:09:01 -- nvmf/common.sh@469 -- # nvmfpid=65391 00:10:32.683 15:09:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:32.683 15:09:01 -- nvmf/common.sh@470 -- # waitforlisten 65391 00:10:32.683 15:09:01 -- common/autotest_common.sh@829 -- # '[' -z 65391 ']' 00:10:32.683 15:09:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.683 15:09:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:32.683 15:09:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.683 15:09:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:32.683 15:09:01 -- common/autotest_common.sh@10 -- # set +x 00:10:32.683 [2024-11-06 15:09:01.914488] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:32.683 [2024-11-06 15:09:01.914578] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.942 [2024-11-06 15:09:02.052263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.942 [2024-11-06 15:09:02.107223] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:32.942 [2024-11-06 15:09:02.107405] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:32.942 [2024-11-06 15:09:02.107420] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.942 [2024-11-06 15:09:02.107429] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.942 [2024-11-06 15:09:02.107461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.883 15:09:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:33.883 15:09:02 -- common/autotest_common.sh@862 -- # return 0 00:10:33.883 15:09:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:33.883 15:09:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:33.883 15:09:02 -- common/autotest_common.sh@10 -- # set +x 00:10:33.883 15:09:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.883 15:09:02 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:33.883 15:09:02 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:33.883 15:09:02 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:34.142 [2024-11-06 15:09:03.167342] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:34.142 15:09:03 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:34.401 15:09:03 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:10:34.660 [2024-11-06 15:09:03.687468] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:34.660 [2024-11-06 15:09:03.687780] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:34.660 15:09:03 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:10:34.919 malloc0 00:10:34.919 15:09:03 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:35.177 15:09:04 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:35.436 15:09:04 -- target/tls.sh@197 -- # bdevperf_pid=65445 00:10:35.436 15:09:04 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:35.436 15:09:04 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:35.436 15:09:04 -- target/tls.sh@200 -- # waitforlisten 65445 /var/tmp/bdevperf.sock 00:10:35.436 15:09:04 -- common/autotest_common.sh@829 -- # '[' -z 65445 ']' 00:10:35.436 15:09:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:35.436 15:09:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:35.436 15:09:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:35.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
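The setup_nvmf_tgt pass that just completed is the positive-path target configuration this test keeps returning to: a TCP transport, one subsystem backed by a malloc bdev, a TLS-enabled listener, and the host registered with the now-0600 interchange key. Collapsed into the underlying RPCs traced above (the shell variables are added here only for brevity), together with the initiator-side attach and the verify job that the earlier key_long pass ran over the secured connection:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt

    # Target side: transport, subsystem, TLS listener (-k), namespace, host + PSK.
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

    # Initiator side: attach through the waiting bdevperf instance; with matching
    # keys the TLSTESTn1 bdev appears and the controller stays connected.
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

    # In the earlier key_long pass this drove the 10-second verify run reported
    # above (about 5.9k IOPS on TLSTESTn1); this final pass dumps the config instead.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests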
00:10:35.436 15:09:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:35.436 15:09:04 -- common/autotest_common.sh@10 -- # set +x 00:10:35.436 [2024-11-06 15:09:04.525231] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:35.436 [2024-11-06 15:09:04.525719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65445 ] 00:10:35.436 [2024-11-06 15:09:04.659089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.695 [2024-11-06 15:09:04.714067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.263 15:09:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:36.263 15:09:05 -- common/autotest_common.sh@862 -- # return 0 00:10:36.263 15:09:05 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:36.522 [2024-11-06 15:09:05.706809] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:36.522 TLSTESTn1 00:10:36.522 15:09:05 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:37.091 15:09:06 -- target/tls.sh@205 -- # tgtconf='{ 00:10:37.091 "subsystems": [ 00:10:37.091 { 00:10:37.091 "subsystem": "iobuf", 00:10:37.091 "config": [ 00:10:37.091 { 00:10:37.091 "method": "iobuf_set_options", 00:10:37.091 "params": { 00:10:37.091 "small_pool_count": 8192, 00:10:37.091 "large_pool_count": 1024, 00:10:37.091 "small_bufsize": 8192, 00:10:37.091 "large_bufsize": 135168 00:10:37.091 } 00:10:37.091 } 00:10:37.091 ] 00:10:37.091 }, 00:10:37.091 { 00:10:37.091 "subsystem": "sock", 00:10:37.091 "config": [ 00:10:37.091 { 00:10:37.091 "method": "sock_impl_set_options", 00:10:37.091 "params": { 00:10:37.091 "impl_name": "uring", 00:10:37.091 "recv_buf_size": 2097152, 00:10:37.091 "send_buf_size": 2097152, 00:10:37.091 "enable_recv_pipe": true, 00:10:37.092 "enable_quickack": false, 00:10:37.092 "enable_placement_id": 0, 00:10:37.092 "enable_zerocopy_send_server": false, 00:10:37.092 "enable_zerocopy_send_client": false, 00:10:37.092 "zerocopy_threshold": 0, 00:10:37.092 "tls_version": 0, 00:10:37.092 "enable_ktls": false 00:10:37.092 } 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "method": "sock_impl_set_options", 00:10:37.092 "params": { 00:10:37.092 "impl_name": "posix", 00:10:37.092 "recv_buf_size": 2097152, 00:10:37.092 "send_buf_size": 2097152, 00:10:37.092 "enable_recv_pipe": true, 00:10:37.092 "enable_quickack": false, 00:10:37.092 "enable_placement_id": 0, 00:10:37.092 "enable_zerocopy_send_server": true, 00:10:37.092 "enable_zerocopy_send_client": false, 00:10:37.092 "zerocopy_threshold": 0, 00:10:37.092 "tls_version": 0, 00:10:37.092 "enable_ktls": false 00:10:37.092 } 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "method": "sock_impl_set_options", 00:10:37.092 "params": { 00:10:37.092 "impl_name": "ssl", 00:10:37.092 "recv_buf_size": 4096, 00:10:37.092 "send_buf_size": 4096, 00:10:37.092 "enable_recv_pipe": true, 00:10:37.092 "enable_quickack": false, 00:10:37.092 "enable_placement_id": 0, 00:10:37.092 "enable_zerocopy_send_server": true, 00:10:37.092 "enable_zerocopy_send_client": false, 00:10:37.092 
"zerocopy_threshold": 0, 00:10:37.092 "tls_version": 0, 00:10:37.092 "enable_ktls": false 00:10:37.092 } 00:10:37.092 } 00:10:37.092 ] 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "subsystem": "vmd", 00:10:37.092 "config": [] 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "subsystem": "accel", 00:10:37.092 "config": [ 00:10:37.092 { 00:10:37.092 "method": "accel_set_options", 00:10:37.092 "params": { 00:10:37.092 "small_cache_size": 128, 00:10:37.092 "large_cache_size": 16, 00:10:37.092 "task_count": 2048, 00:10:37.092 "sequence_count": 2048, 00:10:37.092 "buf_count": 2048 00:10:37.092 } 00:10:37.092 } 00:10:37.092 ] 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "subsystem": "bdev", 00:10:37.092 "config": [ 00:10:37.092 { 00:10:37.092 "method": "bdev_set_options", 00:10:37.092 "params": { 00:10:37.092 "bdev_io_pool_size": 65535, 00:10:37.092 "bdev_io_cache_size": 256, 00:10:37.092 "bdev_auto_examine": true, 00:10:37.092 "iobuf_small_cache_size": 128, 00:10:37.092 "iobuf_large_cache_size": 16 00:10:37.092 } 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "method": "bdev_raid_set_options", 00:10:37.092 "params": { 00:10:37.092 "process_window_size_kb": 1024 00:10:37.092 } 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "method": "bdev_iscsi_set_options", 00:10:37.092 "params": { 00:10:37.092 "timeout_sec": 30 00:10:37.092 } 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "method": "bdev_nvme_set_options", 00:10:37.092 "params": { 00:10:37.092 "action_on_timeout": "none", 00:10:37.092 "timeout_us": 0, 00:10:37.092 "timeout_admin_us": 0, 00:10:37.092 "keep_alive_timeout_ms": 10000, 00:10:37.092 "transport_retry_count": 4, 00:10:37.092 "arbitration_burst": 0, 00:10:37.092 "low_priority_weight": 0, 00:10:37.092 "medium_priority_weight": 0, 00:10:37.092 "high_priority_weight": 0, 00:10:37.092 "nvme_adminq_poll_period_us": 10000, 00:10:37.092 "nvme_ioq_poll_period_us": 0, 00:10:37.092 "io_queue_requests": 0, 00:10:37.092 "delay_cmd_submit": true, 00:10:37.092 "bdev_retry_count": 3, 00:10:37.092 "transport_ack_timeout": 0, 00:10:37.092 "ctrlr_loss_timeout_sec": 0, 00:10:37.092 "reconnect_delay_sec": 0, 00:10:37.092 "fast_io_fail_timeout_sec": 0, 00:10:37.092 "generate_uuids": false, 00:10:37.092 "transport_tos": 0, 00:10:37.092 "io_path_stat": false, 00:10:37.092 "allow_accel_sequence": false 00:10:37.092 } 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "method": "bdev_nvme_set_hotplug", 00:10:37.092 "params": { 00:10:37.092 "period_us": 100000, 00:10:37.092 "enable": false 00:10:37.092 } 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "method": "bdev_malloc_create", 00:10:37.092 "params": { 00:10:37.092 "name": "malloc0", 00:10:37.092 "num_blocks": 8192, 00:10:37.092 "block_size": 4096, 00:10:37.092 "physical_block_size": 4096, 00:10:37.092 "uuid": "05d3f893-5556-4c15-baa7-f6873f31fb50", 00:10:37.092 "optimal_io_boundary": 0 00:10:37.092 } 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "method": "bdev_wait_for_examine" 00:10:37.092 } 00:10:37.092 ] 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "subsystem": "nbd", 00:10:37.092 "config": [] 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "subsystem": "scheduler", 00:10:37.092 "config": [ 00:10:37.092 { 00:10:37.092 "method": "framework_set_scheduler", 00:10:37.092 "params": { 00:10:37.092 "name": "static" 00:10:37.092 } 00:10:37.092 } 00:10:37.092 ] 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "subsystem": "nvmf", 00:10:37.092 "config": [ 00:10:37.092 { 00:10:37.092 "method": "nvmf_set_config", 00:10:37.092 "params": { 00:10:37.092 "discovery_filter": "match_any", 00:10:37.092 
"admin_cmd_passthru": { 00:10:37.092 "identify_ctrlr": false 00:10:37.092 } 00:10:37.092 } 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "method": "nvmf_set_max_subsystems", 00:10:37.092 "params": { 00:10:37.092 "max_subsystems": 1024 00:10:37.092 } 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "method": "nvmf_set_crdt", 00:10:37.092 "params": { 00:10:37.092 "crdt1": 0, 00:10:37.092 "crdt2": 0, 00:10:37.092 "crdt3": 0 00:10:37.092 } 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "method": "nvmf_create_transport", 00:10:37.092 "params": { 00:10:37.092 "trtype": "TCP", 00:10:37.092 "max_queue_depth": 128, 00:10:37.092 "max_io_qpairs_per_ctrlr": 127, 00:10:37.092 "in_capsule_data_size": 4096, 00:10:37.092 "max_io_size": 131072, 00:10:37.092 "io_unit_size": 131072, 00:10:37.092 "max_aq_depth": 128, 00:10:37.092 "num_shared_buffers": 511, 00:10:37.092 "buf_cache_size": 4294967295, 00:10:37.092 "dif_insert_or_strip": false, 00:10:37.092 "zcopy": false, 00:10:37.092 "c2h_success": false, 00:10:37.092 "sock_priority": 0, 00:10:37.092 "abort_timeout_sec": 1 00:10:37.092 } 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "method": "nvmf_create_subsystem", 00:10:37.092 "params": { 00:10:37.092 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:37.092 "allow_any_host": false, 00:10:37.092 "serial_number": "SPDK00000000000001", 00:10:37.092 "model_number": "SPDK bdev Controller", 00:10:37.092 "max_namespaces": 10, 00:10:37.092 "min_cntlid": 1, 00:10:37.092 "max_cntlid": 65519, 00:10:37.092 "ana_reporting": false 00:10:37.092 } 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "method": "nvmf_subsystem_add_host", 00:10:37.092 "params": { 00:10:37.092 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:37.092 "host": "nqn.2016-06.io.spdk:host1", 00:10:37.092 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:10:37.092 } 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "method": "nvmf_subsystem_add_ns", 00:10:37.092 "params": { 00:10:37.092 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:37.092 "namespace": { 00:10:37.092 "nsid": 1, 00:10:37.092 "bdev_name": "malloc0", 00:10:37.092 "nguid": "05D3F89355564C15BAA7F6873F31FB50", 00:10:37.092 "uuid": "05d3f893-5556-4c15-baa7-f6873f31fb50" 00:10:37.092 } 00:10:37.092 } 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "method": "nvmf_subsystem_add_listener", 00:10:37.092 "params": { 00:10:37.092 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:37.092 "listen_address": { 00:10:37.092 "trtype": "TCP", 00:10:37.093 "adrfam": "IPv4", 00:10:37.093 "traddr": "10.0.0.2", 00:10:37.093 "trsvcid": "4420" 00:10:37.093 }, 00:10:37.093 "secure_channel": true 00:10:37.093 } 00:10:37.093 } 00:10:37.093 ] 00:10:37.093 } 00:10:37.093 ] 00:10:37.093 }' 00:10:37.093 15:09:06 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:10:37.093 15:09:06 -- target/tls.sh@206 -- # bdevperfconf='{ 00:10:37.093 "subsystems": [ 00:10:37.093 { 00:10:37.093 "subsystem": "iobuf", 00:10:37.093 "config": [ 00:10:37.093 { 00:10:37.093 "method": "iobuf_set_options", 00:10:37.093 "params": { 00:10:37.093 "small_pool_count": 8192, 00:10:37.093 "large_pool_count": 1024, 00:10:37.093 "small_bufsize": 8192, 00:10:37.093 "large_bufsize": 135168 00:10:37.093 } 00:10:37.093 } 00:10:37.093 ] 00:10:37.093 }, 00:10:37.093 { 00:10:37.093 "subsystem": "sock", 00:10:37.093 "config": [ 00:10:37.093 { 00:10:37.093 "method": "sock_impl_set_options", 00:10:37.093 "params": { 00:10:37.093 "impl_name": "uring", 00:10:37.093 "recv_buf_size": 2097152, 00:10:37.093 "send_buf_size": 2097152, 
00:10:37.093 "enable_recv_pipe": true, 00:10:37.093 "enable_quickack": false, 00:10:37.093 "enable_placement_id": 0, 00:10:37.093 "enable_zerocopy_send_server": false, 00:10:37.093 "enable_zerocopy_send_client": false, 00:10:37.093 "zerocopy_threshold": 0, 00:10:37.093 "tls_version": 0, 00:10:37.093 "enable_ktls": false 00:10:37.093 } 00:10:37.093 }, 00:10:37.093 { 00:10:37.093 "method": "sock_impl_set_options", 00:10:37.093 "params": { 00:10:37.093 "impl_name": "posix", 00:10:37.093 "recv_buf_size": 2097152, 00:10:37.093 "send_buf_size": 2097152, 00:10:37.093 "enable_recv_pipe": true, 00:10:37.093 "enable_quickack": false, 00:10:37.093 "enable_placement_id": 0, 00:10:37.093 "enable_zerocopy_send_server": true, 00:10:37.093 "enable_zerocopy_send_client": false, 00:10:37.093 "zerocopy_threshold": 0, 00:10:37.093 "tls_version": 0, 00:10:37.093 "enable_ktls": false 00:10:37.093 } 00:10:37.093 }, 00:10:37.093 { 00:10:37.093 "method": "sock_impl_set_options", 00:10:37.093 "params": { 00:10:37.093 "impl_name": "ssl", 00:10:37.093 "recv_buf_size": 4096, 00:10:37.093 "send_buf_size": 4096, 00:10:37.093 "enable_recv_pipe": true, 00:10:37.093 "enable_quickack": false, 00:10:37.093 "enable_placement_id": 0, 00:10:37.093 "enable_zerocopy_send_server": true, 00:10:37.093 "enable_zerocopy_send_client": false, 00:10:37.093 "zerocopy_threshold": 0, 00:10:37.093 "tls_version": 0, 00:10:37.093 "enable_ktls": false 00:10:37.093 } 00:10:37.093 } 00:10:37.093 ] 00:10:37.093 }, 00:10:37.093 { 00:10:37.093 "subsystem": "vmd", 00:10:37.093 "config": [] 00:10:37.093 }, 00:10:37.093 { 00:10:37.093 "subsystem": "accel", 00:10:37.093 "config": [ 00:10:37.093 { 00:10:37.093 "method": "accel_set_options", 00:10:37.093 "params": { 00:10:37.093 "small_cache_size": 128, 00:10:37.093 "large_cache_size": 16, 00:10:37.093 "task_count": 2048, 00:10:37.093 "sequence_count": 2048, 00:10:37.093 "buf_count": 2048 00:10:37.093 } 00:10:37.093 } 00:10:37.093 ] 00:10:37.093 }, 00:10:37.093 { 00:10:37.093 "subsystem": "bdev", 00:10:37.093 "config": [ 00:10:37.093 { 00:10:37.093 "method": "bdev_set_options", 00:10:37.093 "params": { 00:10:37.093 "bdev_io_pool_size": 65535, 00:10:37.093 "bdev_io_cache_size": 256, 00:10:37.093 "bdev_auto_examine": true, 00:10:37.093 "iobuf_small_cache_size": 128, 00:10:37.093 "iobuf_large_cache_size": 16 00:10:37.093 } 00:10:37.093 }, 00:10:37.093 { 00:10:37.093 "method": "bdev_raid_set_options", 00:10:37.093 "params": { 00:10:37.093 "process_window_size_kb": 1024 00:10:37.093 } 00:10:37.093 }, 00:10:37.093 { 00:10:37.093 "method": "bdev_iscsi_set_options", 00:10:37.093 "params": { 00:10:37.093 "timeout_sec": 30 00:10:37.093 } 00:10:37.093 }, 00:10:37.093 { 00:10:37.093 "method": "bdev_nvme_set_options", 00:10:37.093 "params": { 00:10:37.093 "action_on_timeout": "none", 00:10:37.093 "timeout_us": 0, 00:10:37.093 "timeout_admin_us": 0, 00:10:37.093 "keep_alive_timeout_ms": 10000, 00:10:37.093 "transport_retry_count": 4, 00:10:37.093 "arbitration_burst": 0, 00:10:37.093 "low_priority_weight": 0, 00:10:37.093 "medium_priority_weight": 0, 00:10:37.093 "high_priority_weight": 0, 00:10:37.093 "nvme_adminq_poll_period_us": 10000, 00:10:37.093 "nvme_ioq_poll_period_us": 0, 00:10:37.093 "io_queue_requests": 512, 00:10:37.093 "delay_cmd_submit": true, 00:10:37.093 "bdev_retry_count": 3, 00:10:37.093 "transport_ack_timeout": 0, 00:10:37.093 "ctrlr_loss_timeout_sec": 0, 00:10:37.093 "reconnect_delay_sec": 0, 00:10:37.093 "fast_io_fail_timeout_sec": 0, 00:10:37.093 "generate_uuids": false, 00:10:37.093 
"transport_tos": 0, 00:10:37.093 "io_path_stat": false, 00:10:37.093 "allow_accel_sequence": false 00:10:37.093 } 00:10:37.093 }, 00:10:37.093 { 00:10:37.093 "method": "bdev_nvme_attach_controller", 00:10:37.093 "params": { 00:10:37.093 "name": "TLSTEST", 00:10:37.093 "trtype": "TCP", 00:10:37.093 "adrfam": "IPv4", 00:10:37.093 "traddr": "10.0.0.2", 00:10:37.093 "trsvcid": "4420", 00:10:37.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:37.093 "prchk_reftag": false, 00:10:37.093 "prchk_guard": false, 00:10:37.093 "ctrlr_loss_timeout_sec": 0, 00:10:37.093 "reconnect_delay_sec": 0, 00:10:37.093 "fast_io_fail_timeout_sec": 0, 00:10:37.093 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:10:37.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:37.093 "hdgst": false, 00:10:37.093 "ddgst": false 00:10:37.093 } 00:10:37.093 }, 00:10:37.093 { 00:10:37.093 "method": "bdev_nvme_set_hotplug", 00:10:37.093 "params": { 00:10:37.093 "period_us": 100000, 00:10:37.093 "enable": false 00:10:37.093 } 00:10:37.093 }, 00:10:37.093 { 00:10:37.093 "method": "bdev_wait_for_examine" 00:10:37.093 } 00:10:37.093 ] 00:10:37.093 }, 00:10:37.093 { 00:10:37.093 "subsystem": "nbd", 00:10:37.093 "config": [] 00:10:37.093 } 00:10:37.093 ] 00:10:37.093 }' 00:10:37.093 15:09:06 -- target/tls.sh@208 -- # killprocess 65445 00:10:37.093 15:09:06 -- common/autotest_common.sh@936 -- # '[' -z 65445 ']' 00:10:37.093 15:09:06 -- common/autotest_common.sh@940 -- # kill -0 65445 00:10:37.093 15:09:06 -- common/autotest_common.sh@941 -- # uname 00:10:37.093 15:09:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:37.093 15:09:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65445 00:10:37.352 killing process with pid 65445 00:10:37.352 Received shutdown signal, test time was about 10.000000 seconds 00:10:37.352 00:10:37.353 Latency(us) 00:10:37.353 [2024-11-06T15:09:06.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:37.353 [2024-11-06T15:09:06.628Z] =================================================================================================================== 00:10:37.353 [2024-11-06T15:09:06.628Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:37.353 15:09:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:37.353 15:09:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:37.353 15:09:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65445' 00:10:37.353 15:09:06 -- common/autotest_common.sh@955 -- # kill 65445 00:10:37.353 15:09:06 -- common/autotest_common.sh@960 -- # wait 65445 00:10:37.353 15:09:06 -- target/tls.sh@209 -- # killprocess 65391 00:10:37.353 15:09:06 -- common/autotest_common.sh@936 -- # '[' -z 65391 ']' 00:10:37.353 15:09:06 -- common/autotest_common.sh@940 -- # kill -0 65391 00:10:37.353 15:09:06 -- common/autotest_common.sh@941 -- # uname 00:10:37.353 15:09:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:37.353 15:09:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65391 00:10:37.353 killing process with pid 65391 00:10:37.353 15:09:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:37.353 15:09:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:37.353 15:09:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65391' 00:10:37.353 15:09:06 -- common/autotest_common.sh@955 -- # kill 65391 00:10:37.353 15:09:06 -- common/autotest_common.sh@960 -- # 
wait 65391 00:10:37.612 15:09:06 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:10:37.612 15:09:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:37.612 15:09:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:37.612 15:09:06 -- target/tls.sh@212 -- # echo '{ 00:10:37.612 "subsystems": [ 00:10:37.612 { 00:10:37.612 "subsystem": "iobuf", 00:10:37.612 "config": [ 00:10:37.612 { 00:10:37.612 "method": "iobuf_set_options", 00:10:37.612 "params": { 00:10:37.612 "small_pool_count": 8192, 00:10:37.612 "large_pool_count": 1024, 00:10:37.612 "small_bufsize": 8192, 00:10:37.612 "large_bufsize": 135168 00:10:37.612 } 00:10:37.612 } 00:10:37.612 ] 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "subsystem": "sock", 00:10:37.612 "config": [ 00:10:37.612 { 00:10:37.612 "method": "sock_impl_set_options", 00:10:37.612 "params": { 00:10:37.612 "impl_name": "uring", 00:10:37.612 "recv_buf_size": 2097152, 00:10:37.612 "send_buf_size": 2097152, 00:10:37.612 "enable_recv_pipe": true, 00:10:37.612 "enable_quickack": false, 00:10:37.612 "enable_placement_id": 0, 00:10:37.612 "enable_zerocopy_send_server": false, 00:10:37.612 "enable_zerocopy_send_client": false, 00:10:37.612 "zerocopy_threshold": 0, 00:10:37.612 "tls_version": 0, 00:10:37.612 "enable_ktls": false 00:10:37.612 } 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "method": "sock_impl_set_options", 00:10:37.612 "params": { 00:10:37.612 "impl_name": "posix", 00:10:37.612 "recv_buf_size": 2097152, 00:10:37.612 "send_buf_size": 2097152, 00:10:37.612 "enable_recv_pipe": true, 00:10:37.612 "enable_quickack": false, 00:10:37.612 "enable_placement_id": 0, 00:10:37.612 "enable_zerocopy_send_server": true, 00:10:37.612 "enable_zerocopy_send_client": false, 00:10:37.612 "zerocopy_threshold": 0, 00:10:37.612 "tls_version": 0, 00:10:37.612 "enable_ktls": false 00:10:37.612 } 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "method": "sock_impl_set_options", 00:10:37.612 "params": { 00:10:37.612 "impl_name": "ssl", 00:10:37.612 "recv_buf_size": 4096, 00:10:37.612 "send_buf_size": 4096, 00:10:37.612 "enable_recv_pipe": true, 00:10:37.612 "enable_quickack": false, 00:10:37.612 "enable_placement_id": 0, 00:10:37.612 "enable_zerocopy_send_server": true, 00:10:37.612 "enable_zerocopy_send_client": false, 00:10:37.612 "zerocopy_threshold": 0, 00:10:37.612 "tls_version": 0, 00:10:37.612 "enable_ktls": false 00:10:37.612 } 00:10:37.612 } 00:10:37.612 ] 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "subsystem": "vmd", 00:10:37.612 "config": [] 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "subsystem": "accel", 00:10:37.612 "config": [ 00:10:37.612 { 00:10:37.612 "method": "accel_set_options", 00:10:37.612 "params": { 00:10:37.612 "small_cache_size": 128, 00:10:37.612 "large_cache_size": 16, 00:10:37.612 "task_count": 2048, 00:10:37.612 "sequence_count": 2048, 00:10:37.612 "buf_count": 2048 00:10:37.612 } 00:10:37.612 } 00:10:37.612 ] 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "subsystem": "bdev", 00:10:37.612 "config": [ 00:10:37.612 { 00:10:37.612 "method": "bdev_set_options", 00:10:37.612 "params": { 00:10:37.612 "bdev_io_pool_size": 65535, 00:10:37.612 "bdev_io_cache_size": 256, 00:10:37.612 "bdev_auto_examine": true, 00:10:37.612 "iobuf_small_cache_size": 128, 00:10:37.612 "iobuf_large_cache_size": 16 00:10:37.612 } 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "method": "bdev_raid_set_options", 00:10:37.612 "params": { 00:10:37.612 "process_window_size_kb": 1024 00:10:37.612 } 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "method": 
"bdev_iscsi_set_options", 00:10:37.612 "params": { 00:10:37.612 "timeout_sec": 30 00:10:37.612 } 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "method": "bdev_nvme_set_options", 00:10:37.612 "params": { 00:10:37.612 "action_on_timeout": "none", 00:10:37.612 "timeout_us": 0, 00:10:37.612 "timeout_admin_us": 0, 00:10:37.612 "keep_alive_timeout_ms": 10000, 00:10:37.612 "transport_retry_count": 4, 00:10:37.612 "arbitration_burst": 0, 00:10:37.612 "low_priority_weight": 0, 00:10:37.612 "medium_priority_weight": 0, 00:10:37.612 "high_priority_weight": 0, 00:10:37.612 "nvme_adminq_poll_period_us": 10000, 00:10:37.612 "nvme_ioq_poll_period_us": 0, 00:10:37.612 "io_queue_requests": 0, 00:10:37.612 "delay_cmd_submit": true, 00:10:37.612 "bdev_retry_count": 3, 00:10:37.612 "transport_ack_timeout": 0, 00:10:37.612 "ctrlr_loss_timeout_sec": 0, 00:10:37.612 "reconnect_delay_sec": 0, 00:10:37.612 "fast_io_fail_timeout_sec": 0, 00:10:37.612 "generate_uuids": false, 00:10:37.612 "transport_tos": 0, 00:10:37.612 "io_path_stat": false, 00:10:37.612 "allow_accel_sequence": false 00:10:37.612 } 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "method": "bdev_nvme_set_hotplug", 00:10:37.612 "params": { 00:10:37.612 "period_us": 100000, 00:10:37.612 "enable": false 00:10:37.612 } 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "method": "bdev_malloc_create", 00:10:37.612 "params": { 00:10:37.612 "name": "malloc0", 00:10:37.612 "num_blocks": 8192, 00:10:37.612 "block_size": 4096, 00:10:37.612 "physical_block_size": 4096, 00:10:37.612 "uuid": "05d3f893-5556-4c15-baa7-f6873f31fb50", 00:10:37.612 "optimal_io_boundary": 0 00:10:37.612 } 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "method": "bdev_wait_for_examine" 00:10:37.612 } 00:10:37.612 ] 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "subsystem": "nbd", 00:10:37.612 "config": [] 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "subsystem": "scheduler", 00:10:37.612 "config": [ 00:10:37.612 { 00:10:37.612 "method": "framework_set_scheduler", 00:10:37.612 "params": { 00:10:37.612 "name": "static" 00:10:37.612 } 00:10:37.612 } 00:10:37.612 ] 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "subsystem": "nvmf", 00:10:37.612 "config": [ 00:10:37.612 { 00:10:37.612 "method": "nvmf_set_config", 00:10:37.612 "params": { 00:10:37.612 "discovery_filter": "match_any", 00:10:37.612 "admin_cmd_passthru": { 00:10:37.612 "identify_ctrlr": false 00:10:37.612 } 00:10:37.612 } 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "method": "nvmf_set_max_subsystems", 00:10:37.612 "params": { 00:10:37.612 "max_subsystems": 1024 00:10:37.612 } 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "method": "nvmf_set_crdt", 00:10:37.612 "params": { 00:10:37.612 "crdt1": 0, 00:10:37.612 "crdt2": 0, 00:10:37.612 "crdt3": 0 00:10:37.612 } 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "method": "nvmf_create_transport", 00:10:37.612 "params": { 00:10:37.612 "trtype": "TCP", 00:10:37.612 "max_queue_depth": 128, 00:10:37.612 "max_io_qpairs_per_ctrlr": 127, 00:10:37.612 "in_capsule_data_size": 4096, 00:10:37.612 "max_io_size": 131072, 00:10:37.612 "io_unit_size": 131072, 00:10:37.612 "max_aq_depth": 128, 00:10:37.612 "num_shared_buffers": 511, 00:10:37.612 "buf_cache_size": 4294967295, 00:10:37.612 "dif_insert_or_strip": false, 00:10:37.612 "zcopy": false, 00:10:37.612 "c2h_success": false, 00:10:37.612 "sock_priority": 0, 00:10:37.612 "abort_timeout_sec": 1 00:10:37.612 } 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "method": "nvmf_create_subsystem", 00:10:37.612 "params": { 00:10:37.612 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:37.612 
"allow_any_host": false, 00:10:37.612 "serial_number": "SPDK00000000000001", 00:10:37.612 "model_number": "SPDK bdev Controller", 00:10:37.612 "max_namespaces": 10, 00:10:37.612 "min_cntlid": 1, 00:10:37.612 "max_cntlid": 65519, 00:10:37.612 "ana_reporting": false 00:10:37.612 } 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "method": "nvmf_subsystem_add_host", 00:10:37.612 "params": { 00:10:37.612 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:37.612 "host": "nqn.2016-06.io.spdk:host1", 00:10:37.612 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:10:37.612 } 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "method": "nvmf_subsystem_add_ns", 00:10:37.612 "params": { 00:10:37.612 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:37.612 "namespace": { 00:10:37.612 "nsid": 1, 00:10:37.612 "bdev_name": "malloc0", 00:10:37.612 "nguid": "05D3F89355564C15BAA7F6873F31FB50", 00:10:37.612 "uuid": "05d3f893-5556-4c15-baa7-f6873f31fb50" 00:10:37.612 } 00:10:37.612 } 00:10:37.612 }, 00:10:37.612 { 00:10:37.612 "method": "nvmf_subsystem_add_listener", 00:10:37.612 "params": { 00:10:37.612 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:37.612 "listen_address": { 00:10:37.612 "trtype": "TCP", 00:10:37.612 "adrfam": "IPv4", 00:10:37.612 "traddr": "10.0.0.2", 00:10:37.612 "trsvcid": "4420" 00:10:37.612 }, 00:10:37.612 "secure_channel": true 00:10:37.612 } 00:10:37.612 } 00:10:37.612 ] 00:10:37.612 } 00:10:37.612 ] 00:10:37.613 }' 00:10:37.613 15:09:06 -- common/autotest_common.sh@10 -- # set +x 00:10:37.613 15:09:06 -- nvmf/common.sh@469 -- # nvmfpid=65494 00:10:37.613 15:09:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:10:37.613 15:09:06 -- nvmf/common.sh@470 -- # waitforlisten 65494 00:10:37.613 15:09:06 -- common/autotest_common.sh@829 -- # '[' -z 65494 ']' 00:10:37.613 15:09:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.613 15:09:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:37.613 15:09:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.613 15:09:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:37.613 15:09:06 -- common/autotest_common.sh@10 -- # set +x 00:10:37.613 [2024-11-06 15:09:06.846221] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:37.613 [2024-11-06 15:09:06.846496] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.872 [2024-11-06 15:09:06.979291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.872 [2024-11-06 15:09:07.028309] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:37.872 [2024-11-06 15:09:07.028446] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.872 [2024-11-06 15:09:07.028459] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.872 [2024-11-06 15:09:07.028465] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:37.872 [2024-11-06 15:09:07.028492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.131 [2024-11-06 15:09:07.206303] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.131 [2024-11-06 15:09:07.238265] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:38.131 [2024-11-06 15:09:07.238645] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:38.699 15:09:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:38.699 15:09:07 -- common/autotest_common.sh@862 -- # return 0 00:10:38.699 15:09:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:38.699 15:09:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:38.699 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:10:38.699 15:09:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.699 15:09:07 -- target/tls.sh@216 -- # bdevperf_pid=65520 00:10:38.699 15:09:07 -- target/tls.sh@217 -- # waitforlisten 65520 /var/tmp/bdevperf.sock 00:10:38.699 15:09:07 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:10:38.699 15:09:07 -- target/tls.sh@213 -- # echo '{ 00:10:38.699 "subsystems": [ 00:10:38.699 { 00:10:38.699 "subsystem": "iobuf", 00:10:38.699 "config": [ 00:10:38.699 { 00:10:38.699 "method": "iobuf_set_options", 00:10:38.699 "params": { 00:10:38.699 "small_pool_count": 8192, 00:10:38.699 "large_pool_count": 1024, 00:10:38.699 "small_bufsize": 8192, 00:10:38.699 "large_bufsize": 135168 00:10:38.699 } 00:10:38.699 } 00:10:38.699 ] 00:10:38.699 }, 00:10:38.699 { 00:10:38.699 "subsystem": "sock", 00:10:38.699 "config": [ 00:10:38.699 { 00:10:38.699 "method": "sock_impl_set_options", 00:10:38.699 "params": { 00:10:38.699 "impl_name": "uring", 00:10:38.699 "recv_buf_size": 2097152, 00:10:38.699 "send_buf_size": 2097152, 00:10:38.699 "enable_recv_pipe": true, 00:10:38.699 "enable_quickack": false, 00:10:38.699 "enable_placement_id": 0, 00:10:38.699 "enable_zerocopy_send_server": false, 00:10:38.699 "enable_zerocopy_send_client": false, 00:10:38.699 "zerocopy_threshold": 0, 00:10:38.699 "tls_version": 0, 00:10:38.699 "enable_ktls": false 00:10:38.699 } 00:10:38.699 }, 00:10:38.699 { 00:10:38.699 "method": "sock_impl_set_options", 00:10:38.699 "params": { 00:10:38.699 "impl_name": "posix", 00:10:38.699 "recv_buf_size": 2097152, 00:10:38.699 "send_buf_size": 2097152, 00:10:38.699 "enable_recv_pipe": true, 00:10:38.699 "enable_quickack": false, 00:10:38.699 "enable_placement_id": 0, 00:10:38.699 "enable_zerocopy_send_server": true, 00:10:38.699 "enable_zerocopy_send_client": false, 00:10:38.699 "zerocopy_threshold": 0, 00:10:38.699 "tls_version": 0, 00:10:38.699 "enable_ktls": false 00:10:38.699 } 00:10:38.699 }, 00:10:38.699 { 00:10:38.699 "method": "sock_impl_set_options", 00:10:38.699 "params": { 00:10:38.699 "impl_name": "ssl", 00:10:38.699 "recv_buf_size": 4096, 00:10:38.699 "send_buf_size": 4096, 00:10:38.699 "enable_recv_pipe": true, 00:10:38.699 "enable_quickack": false, 00:10:38.699 "enable_placement_id": 0, 00:10:38.699 "enable_zerocopy_send_server": true, 00:10:38.699 "enable_zerocopy_send_client": false, 00:10:38.699 "zerocopy_threshold": 0, 00:10:38.699 "tls_version": 0, 00:10:38.699 "enable_ktls": false 00:10:38.699 } 00:10:38.699 } 00:10:38.699 ] 00:10:38.699 }, 00:10:38.699 { 00:10:38.699 "subsystem": 
"vmd", 00:10:38.699 "config": [] 00:10:38.699 }, 00:10:38.699 { 00:10:38.699 "subsystem": "accel", 00:10:38.699 "config": [ 00:10:38.699 { 00:10:38.699 "method": "accel_set_options", 00:10:38.699 "params": { 00:10:38.699 "small_cache_size": 128, 00:10:38.699 "large_cache_size": 16, 00:10:38.699 "task_count": 2048, 00:10:38.699 "sequence_count": 2048, 00:10:38.699 "buf_count": 2048 00:10:38.699 } 00:10:38.699 } 00:10:38.699 ] 00:10:38.699 }, 00:10:38.699 { 00:10:38.699 "subsystem": "bdev", 00:10:38.699 "config": [ 00:10:38.699 { 00:10:38.699 "method": "bdev_set_options", 00:10:38.699 "params": { 00:10:38.699 "bdev_io_pool_size": 65535, 00:10:38.699 "bdev_io_cache_size": 256, 00:10:38.699 "bdev_auto_examine": true, 00:10:38.699 "iobuf_small_cache_size": 128, 00:10:38.699 "iobuf_large_cache_size": 16 00:10:38.699 } 00:10:38.699 }, 00:10:38.699 { 00:10:38.699 "method": "bdev_raid_set_options", 00:10:38.700 "params": { 00:10:38.700 "process_window_size_kb": 1024 00:10:38.700 } 00:10:38.700 }, 00:10:38.700 { 00:10:38.700 "method": "bdev_iscsi_set_options", 00:10:38.700 "params": { 00:10:38.700 "timeout_sec": 30 00:10:38.700 } 00:10:38.700 }, 00:10:38.700 { 00:10:38.700 "method": "bdev_nvme_set_options", 00:10:38.700 "params": { 00:10:38.700 "action_on_timeout": "none", 00:10:38.700 "timeout_us": 0, 00:10:38.700 "timeout_admin_us": 0, 00:10:38.700 "keep_alive_timeout_ms": 10000, 00:10:38.700 "transport_retry_count": 4, 00:10:38.700 "arbitration_burst": 0, 00:10:38.700 "low_priority_weight": 0, 00:10:38.700 "medium_priority_weight": 0, 00:10:38.700 "high_priority_weight": 0, 00:10:38.700 "nvme_adminq_poll_period_us": 10000, 00:10:38.700 "nvme_ioq_poll_period_us": 0, 00:10:38.700 "io_queue_requests": 512, 00:10:38.700 "delay_cmd_submit": true, 00:10:38.700 "bdev_retry_count": 3, 00:10:38.700 "transport_ack_timeout": 0, 00:10:38.700 "ctrlr_loss_timeout_sec": 0, 00:10:38.700 "reconnect_delay_sec": 0, 00:10:38.700 "fast_io_fail_timeout_sec": 0, 00:10:38.700 "generate_uuids": false, 00:10:38.700 "transport_tos": 0, 00:10:38.700 "io_path_stat": false, 00:10:38.700 "allow_accel_sequence": false 00:10:38.700 } 00:10:38.700 }, 00:10:38.700 { 00:10:38.700 "method": "bdev_nvme_attach_controller", 00:10:38.700 "params": { 00:10:38.700 "name": "TLSTEST", 00:10:38.700 "trtype": "TCP", 00:10:38.700 "adrfam": "IPv4", 00:10:38.700 "traddr": "10.0.0.2", 00:10:38.700 "trsvcid": "4420", 00:10:38.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:38.700 "prchk_reftag": false, 00:10:38.700 "prchk_guard": false, 00:10:38.700 "ctrlr_loss_timeout_sec": 0, 00:10:38.700 "reconnect_delay_sec": 0, 00:10:38.700 "fast_io_fail_timeout_sec": 0, 00:10:38.700 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:10:38.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:38.700 "hdgst": false, 00:10:38.700 "ddgst": false 00:10:38.700 } 00:10:38.700 }, 00:10:38.700 { 00:10:38.700 "method": "bdev_nvme_set_hotplug", 00:10:38.700 "params": { 00:10:38.700 "period_us": 100000, 00:10:38.700 "enable": false 00:10:38.700 } 00:10:38.700 }, 00:10:38.700 { 00:10:38.700 "method": "bdev_wait_for_examine" 00:10:38.700 } 00:10:38.700 ] 00:10:38.700 }, 00:10:38.700 { 00:10:38.700 "subsystem": "nbd", 00:10:38.700 "config": [] 00:10:38.700 } 00:10:38.700 ] 00:10:38.700 }' 00:10:38.700 15:09:07 -- common/autotest_common.sh@829 -- # '[' -z 65520 ']' 00:10:38.700 15:09:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:38.700 15:09:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:38.700 
15:09:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:38.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:38.700 15:09:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:38.700 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:10:38.700 [2024-11-06 15:09:07.845133] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:38.700 [2024-11-06 15:09:07.845436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65520 ] 00:10:38.959 [2024-11-06 15:09:07.984778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.959 [2024-11-06 15:09:08.036898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.959 [2024-11-06 15:09:08.158493] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:39.895 15:09:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:39.895 15:09:08 -- common/autotest_common.sh@862 -- # return 0 00:10:39.895 15:09:08 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:10:39.895 Running I/O for 10 seconds... 00:10:49.871 00:10:49.871 Latency(us) 00:10:49.871 [2024-11-06T15:09:19.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:49.871 [2024-11-06T15:09:19.146Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:10:49.871 Verification LBA range: start 0x0 length 0x2000 00:10:49.871 TLSTESTn1 : 10.01 5814.07 22.71 0.00 0.00 21981.66 3991.74 25976.09 00:10:49.871 [2024-11-06T15:09:19.146Z] =================================================================================================================== 00:10:49.871 [2024-11-06T15:09:19.146Z] Total : 5814.07 22.71 0.00 0.00 21981.66 3991.74 25976.09 00:10:49.871 0 00:10:49.871 15:09:18 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:49.871 15:09:18 -- target/tls.sh@223 -- # killprocess 65520 00:10:49.871 15:09:18 -- common/autotest_common.sh@936 -- # '[' -z 65520 ']' 00:10:49.871 15:09:18 -- common/autotest_common.sh@940 -- # kill -0 65520 00:10:49.871 15:09:18 -- common/autotest_common.sh@941 -- # uname 00:10:49.871 15:09:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:49.871 15:09:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65520 00:10:49.871 killing process with pid 65520 00:10:49.871 Received shutdown signal, test time was about 10.000000 seconds 00:10:49.871 00:10:49.871 Latency(us) 00:10:49.871 [2024-11-06T15:09:19.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:49.871 [2024-11-06T15:09:19.146Z] =================================================================================================================== 00:10:49.871 [2024-11-06T15:09:19.146Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:49.871 15:09:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:49.871 15:09:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:49.871 15:09:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65520' 00:10:49.871 15:09:18 -- common/autotest_common.sh@955 
-- # kill 65520 00:10:49.871 15:09:18 -- common/autotest_common.sh@960 -- # wait 65520 00:10:50.130 15:09:19 -- target/tls.sh@224 -- # killprocess 65494 00:10:50.130 15:09:19 -- common/autotest_common.sh@936 -- # '[' -z 65494 ']' 00:10:50.130 15:09:19 -- common/autotest_common.sh@940 -- # kill -0 65494 00:10:50.130 15:09:19 -- common/autotest_common.sh@941 -- # uname 00:10:50.130 15:09:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:50.130 15:09:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65494 00:10:50.130 killing process with pid 65494 00:10:50.130 15:09:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:50.130 15:09:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:50.130 15:09:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65494' 00:10:50.130 15:09:19 -- common/autotest_common.sh@955 -- # kill 65494 00:10:50.130 15:09:19 -- common/autotest_common.sh@960 -- # wait 65494 00:10:50.130 15:09:19 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:10:50.130 15:09:19 -- target/tls.sh@227 -- # cleanup 00:10:50.130 15:09:19 -- target/tls.sh@15 -- # process_shm --id 0 00:10:50.130 15:09:19 -- common/autotest_common.sh@806 -- # type=--id 00:10:50.130 15:09:19 -- common/autotest_common.sh@807 -- # id=0 00:10:50.130 15:09:19 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:10:50.130 15:09:19 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:50.130 15:09:19 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:10:50.130 15:09:19 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:10:50.130 15:09:19 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:10:50.130 15:09:19 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:50.130 nvmf_trace.0 00:10:50.389 15:09:19 -- common/autotest_common.sh@821 -- # return 0 00:10:50.389 15:09:19 -- target/tls.sh@16 -- # killprocess 65520 00:10:50.389 15:09:19 -- common/autotest_common.sh@936 -- # '[' -z 65520 ']' 00:10:50.389 Process with pid 65520 is not found 00:10:50.389 15:09:19 -- common/autotest_common.sh@940 -- # kill -0 65520 00:10:50.389 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (65520) - No such process 00:10:50.389 15:09:19 -- common/autotest_common.sh@963 -- # echo 'Process with pid 65520 is not found' 00:10:50.389 15:09:19 -- target/tls.sh@17 -- # nvmftestfini 00:10:50.389 15:09:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:50.389 15:09:19 -- nvmf/common.sh@116 -- # sync 00:10:50.389 15:09:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:50.389 15:09:19 -- nvmf/common.sh@119 -- # set +e 00:10:50.389 15:09:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:50.389 15:09:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:50.389 rmmod nvme_tcp 00:10:50.389 rmmod nvme_fabrics 00:10:50.389 rmmod nvme_keyring 00:10:50.389 15:09:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:50.389 15:09:19 -- nvmf/common.sh@123 -- # set -e 00:10:50.389 15:09:19 -- nvmf/common.sh@124 -- # return 0 00:10:50.389 15:09:19 -- nvmf/common.sh@477 -- # '[' -n 65494 ']' 00:10:50.389 15:09:19 -- nvmf/common.sh@478 -- # killprocess 65494 00:10:50.389 15:09:19 -- common/autotest_common.sh@936 -- # '[' -z 65494 ']' 00:10:50.389 15:09:19 -- common/autotest_common.sh@940 -- # kill -0 65494 00:10:50.389 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (65494) - No such process 00:10:50.389 15:09:19 -- common/autotest_common.sh@963 -- # echo 'Process with pid 65494 is not found' 00:10:50.389 Process with pid 65494 is not found 00:10:50.389 15:09:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:50.389 15:09:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:50.389 15:09:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:50.389 15:09:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:50.389 15:09:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:50.389 15:09:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.389 15:09:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:50.389 15:09:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.389 15:09:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:50.389 15:09:19 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:50.389 00:10:50.389 real 1m11.027s 00:10:50.389 user 1m50.969s 00:10:50.389 sys 0m23.654s 00:10:50.389 15:09:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:50.389 15:09:19 -- common/autotest_common.sh@10 -- # set +x 00:10:50.389 ************************************ 00:10:50.389 END TEST nvmf_tls 00:10:50.389 ************************************ 00:10:50.389 15:09:19 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:10:50.389 15:09:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:50.389 15:09:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:50.389 15:09:19 -- common/autotest_common.sh@10 -- # set +x 00:10:50.389 ************************************ 00:10:50.389 START TEST nvmf_fips 00:10:50.389 ************************************ 00:10:50.389 15:09:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:10:50.649 * Looking for test storage... 
00:10:50.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:10:50.649 15:09:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:50.649 15:09:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:50.649 15:09:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:50.649 15:09:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:50.649 15:09:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:50.649 15:09:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:50.649 15:09:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:50.649 15:09:19 -- scripts/common.sh@335 -- # IFS=.-: 00:10:50.649 15:09:19 -- scripts/common.sh@335 -- # read -ra ver1 00:10:50.649 15:09:19 -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.649 15:09:19 -- scripts/common.sh@336 -- # read -ra ver2 00:10:50.649 15:09:19 -- scripts/common.sh@337 -- # local 'op=<' 00:10:50.649 15:09:19 -- scripts/common.sh@339 -- # ver1_l=2 00:10:50.649 15:09:19 -- scripts/common.sh@340 -- # ver2_l=1 00:10:50.649 15:09:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:50.649 15:09:19 -- scripts/common.sh@343 -- # case "$op" in 00:10:50.649 15:09:19 -- scripts/common.sh@344 -- # : 1 00:10:50.649 15:09:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:50.649 15:09:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:50.649 15:09:19 -- scripts/common.sh@364 -- # decimal 1 00:10:50.649 15:09:19 -- scripts/common.sh@352 -- # local d=1 00:10:50.649 15:09:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.649 15:09:19 -- scripts/common.sh@354 -- # echo 1 00:10:50.649 15:09:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:50.649 15:09:19 -- scripts/common.sh@365 -- # decimal 2 00:10:50.649 15:09:19 -- scripts/common.sh@352 -- # local d=2 00:10:50.649 15:09:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.649 15:09:19 -- scripts/common.sh@354 -- # echo 2 00:10:50.649 15:09:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:50.649 15:09:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:50.649 15:09:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:50.649 15:09:19 -- scripts/common.sh@367 -- # return 0 00:10:50.649 15:09:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.649 15:09:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:50.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.649 --rc genhtml_branch_coverage=1 00:10:50.649 --rc genhtml_function_coverage=1 00:10:50.649 --rc genhtml_legend=1 00:10:50.649 --rc geninfo_all_blocks=1 00:10:50.649 --rc geninfo_unexecuted_blocks=1 00:10:50.649 00:10:50.649 ' 00:10:50.649 15:09:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:50.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.649 --rc genhtml_branch_coverage=1 00:10:50.649 --rc genhtml_function_coverage=1 00:10:50.649 --rc genhtml_legend=1 00:10:50.649 --rc geninfo_all_blocks=1 00:10:50.649 --rc geninfo_unexecuted_blocks=1 00:10:50.649 00:10:50.649 ' 00:10:50.649 15:09:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:50.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.649 --rc genhtml_branch_coverage=1 00:10:50.649 --rc genhtml_function_coverage=1 00:10:50.649 --rc genhtml_legend=1 00:10:50.649 --rc geninfo_all_blocks=1 00:10:50.649 --rc geninfo_unexecuted_blocks=1 00:10:50.649 00:10:50.649 ' 00:10:50.649 
15:09:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:50.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.649 --rc genhtml_branch_coverage=1 00:10:50.649 --rc genhtml_function_coverage=1 00:10:50.649 --rc genhtml_legend=1 00:10:50.649 --rc geninfo_all_blocks=1 00:10:50.649 --rc geninfo_unexecuted_blocks=1 00:10:50.649 00:10:50.649 ' 00:10:50.649 15:09:19 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:50.649 15:09:19 -- nvmf/common.sh@7 -- # uname -s 00:10:50.649 15:09:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.649 15:09:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.649 15:09:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.649 15:09:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.649 15:09:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.649 15:09:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.649 15:09:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.649 15:09:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.649 15:09:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.649 15:09:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.649 15:09:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:10:50.649 15:09:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:10:50.649 15:09:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.649 15:09:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.649 15:09:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:50.649 15:09:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:50.649 15:09:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.649 15:09:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.649 15:09:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.649 15:09:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.649 15:09:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.649 15:09:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.649 15:09:19 -- paths/export.sh@5 -- # export PATH 00:10:50.649 15:09:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.649 15:09:19 -- nvmf/common.sh@46 -- # : 0 00:10:50.649 15:09:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:50.649 15:09:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:50.649 15:09:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:50.649 15:09:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.649 15:09:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.649 15:09:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:50.649 15:09:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:50.649 15:09:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:50.649 15:09:19 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:50.649 15:09:19 -- fips/fips.sh@89 -- # check_openssl_version 00:10:50.649 15:09:19 -- fips/fips.sh@83 -- # local target=3.0.0 00:10:50.649 15:09:19 -- fips/fips.sh@85 -- # openssl version 00:10:50.649 15:09:19 -- fips/fips.sh@85 -- # awk '{print $2}' 00:10:50.649 15:09:19 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:10:50.649 15:09:19 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:10:50.649 15:09:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:50.649 15:09:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:50.649 15:09:19 -- scripts/common.sh@335 -- # IFS=.-: 00:10:50.649 15:09:19 -- scripts/common.sh@335 -- # read -ra ver1 00:10:50.649 15:09:19 -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.649 15:09:19 -- scripts/common.sh@336 -- # read -ra ver2 00:10:50.649 15:09:19 -- scripts/common.sh@337 -- # local 'op=>=' 00:10:50.649 15:09:19 -- scripts/common.sh@339 -- # ver1_l=3 00:10:50.649 15:09:19 -- scripts/common.sh@340 -- # ver2_l=3 00:10:50.649 15:09:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:50.649 15:09:19 -- scripts/common.sh@343 -- # case "$op" in 00:10:50.649 15:09:19 -- scripts/common.sh@347 -- # : 1 00:10:50.649 15:09:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:50.650 15:09:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:50.650 15:09:19 -- scripts/common.sh@364 -- # decimal 3 00:10:50.650 15:09:19 -- scripts/common.sh@352 -- # local d=3 00:10:50.650 15:09:19 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:10:50.650 15:09:19 -- scripts/common.sh@354 -- # echo 3 00:10:50.650 15:09:19 -- scripts/common.sh@364 -- # ver1[v]=3 00:10:50.650 15:09:19 -- scripts/common.sh@365 -- # decimal 3 00:10:50.650 15:09:19 -- scripts/common.sh@352 -- # local d=3 00:10:50.650 15:09:19 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:10:50.650 15:09:19 -- scripts/common.sh@354 -- # echo 3 00:10:50.650 15:09:19 -- scripts/common.sh@365 -- # ver2[v]=3 00:10:50.650 15:09:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:50.650 15:09:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:50.650 15:09:19 -- scripts/common.sh@363 -- # (( v++ )) 00:10:50.650 15:09:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:50.650 15:09:19 -- scripts/common.sh@364 -- # decimal 1 00:10:50.650 15:09:19 -- scripts/common.sh@352 -- # local d=1 00:10:50.650 15:09:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.650 15:09:19 -- scripts/common.sh@354 -- # echo 1 00:10:50.650 15:09:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:50.650 15:09:19 -- scripts/common.sh@365 -- # decimal 0 00:10:50.650 15:09:19 -- scripts/common.sh@352 -- # local d=0 00:10:50.650 15:09:19 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:10:50.650 15:09:19 -- scripts/common.sh@354 -- # echo 0 00:10:50.650 15:09:19 -- scripts/common.sh@365 -- # ver2[v]=0 00:10:50.650 15:09:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:50.650 15:09:19 -- scripts/common.sh@366 -- # return 0 00:10:50.650 15:09:19 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:10:50.650 15:09:19 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:10:50.650 15:09:19 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:10:50.650 15:09:19 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:10:50.650 15:09:19 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:10:50.650 15:09:19 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:10:50.650 15:09:19 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:10:50.650 15:09:19 -- fips/fips.sh@113 -- # build_openssl_config 00:10:50.650 15:09:19 -- fips/fips.sh@37 -- # cat 00:10:50.650 15:09:19 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:10:50.650 15:09:19 -- fips/fips.sh@58 -- # cat - 00:10:50.650 15:09:19 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:10:50.650 15:09:19 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:10:50.650 15:09:19 -- fips/fips.sh@116 -- # mapfile -t providers 00:10:50.650 15:09:19 -- fips/fips.sh@116 -- # openssl list -providers 00:10:50.650 15:09:19 -- fips/fips.sh@116 -- # grep name 00:10:50.910 15:09:19 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:10:50.910 15:09:19 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:10:50.910 15:09:19 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:10:50.910 15:09:19 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:10:50.910 15:09:19 -- fips/fips.sh@127 -- # : 00:10:50.910 15:09:19 -- common/autotest_common.sh@650 -- # local es=0 00:10:50.910 15:09:19 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:10:50.910 15:09:19 -- common/autotest_common.sh@638 -- # local arg=openssl 00:10:50.910 15:09:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:50.910 15:09:19 -- common/autotest_common.sh@642 -- # type -t openssl 00:10:50.910 15:09:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:50.910 15:09:19 -- common/autotest_common.sh@644 -- # type -P openssl 00:10:50.910 15:09:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:50.910 15:09:19 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:10:50.910 15:09:19 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:10:50.910 15:09:19 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:10:50.910 Error setting digest 00:10:50.910 40E2EEA0007F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:10:50.910 40E2EEA0007F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:10:50.910 15:09:19 -- common/autotest_common.sh@653 -- # es=1 00:10:50.910 15:09:19 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:50.910 15:09:19 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:50.910 15:09:19 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:50.911 15:09:19 -- fips/fips.sh@130 -- # nvmftestinit 00:10:50.911 15:09:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:50.911 15:09:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.911 15:09:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:50.911 15:09:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:50.911 15:09:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:50.911 15:09:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.911 15:09:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:50.911 15:09:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.911 15:09:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:50.911 15:09:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:50.911 15:09:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:50.911 15:09:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:50.911 15:09:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:50.911 15:09:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:50.911 15:09:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.911 15:09:19 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.911 15:09:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:50.911 15:09:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:50.911 15:09:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:50.911 15:09:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:50.911 15:09:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:50.911 15:09:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.911 15:09:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:50.911 15:09:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:50.911 15:09:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:50.911 15:09:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:50.911 15:09:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:50.911 15:09:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:50.911 Cannot find device "nvmf_tgt_br" 00:10:50.911 15:09:20 -- nvmf/common.sh@154 -- # true 00:10:50.911 15:09:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:50.911 Cannot find device "nvmf_tgt_br2" 00:10:50.911 15:09:20 -- nvmf/common.sh@155 -- # true 00:10:50.911 15:09:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:50.911 15:09:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:50.911 Cannot find device "nvmf_tgt_br" 00:10:50.911 15:09:20 -- nvmf/common.sh@157 -- # true 00:10:50.911 15:09:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:50.911 Cannot find device "nvmf_tgt_br2" 00:10:50.911 15:09:20 -- nvmf/common.sh@158 -- # true 00:10:50.911 15:09:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:50.911 15:09:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:50.911 15:09:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:50.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:50.911 15:09:20 -- nvmf/common.sh@161 -- # true 00:10:50.911 15:09:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:50.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:50.911 15:09:20 -- nvmf/common.sh@162 -- # true 00:10:50.911 15:09:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:50.911 15:09:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:50.911 15:09:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:50.911 15:09:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:50.911 15:09:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:50.911 15:09:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:51.170 15:09:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:51.170 15:09:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:51.170 15:09:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:51.170 15:09:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:51.170 15:09:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:51.170 15:09:20 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:51.170 15:09:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:51.170 15:09:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:51.170 15:09:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:51.170 15:09:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:51.170 15:09:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:51.170 15:09:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:51.170 15:09:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:51.170 15:09:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:51.170 15:09:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:51.170 15:09:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:51.170 15:09:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:51.170 15:09:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:51.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:10:51.170 00:10:51.170 --- 10.0.0.2 ping statistics --- 00:10:51.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.170 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:10:51.170 15:09:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:51.170 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:51.170 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:10:51.170 00:10:51.170 --- 10.0.0.3 ping statistics --- 00:10:51.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.170 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:51.170 15:09:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:51.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:51.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:51.170 00:10:51.170 --- 10.0.0.1 ping statistics --- 00:10:51.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.170 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:51.170 15:09:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.170 15:09:20 -- nvmf/common.sh@421 -- # return 0 00:10:51.170 15:09:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:51.170 15:09:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.170 15:09:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:51.170 15:09:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:51.170 15:09:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.170 15:09:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:51.170 15:09:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:51.170 15:09:20 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:10:51.170 15:09:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:51.170 15:09:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:51.170 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:10:51.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
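For reference, the nvmf_veth_init sequence traced a few lines above reduces to the following iproute2 commands, condensed from the trace itself (the interface names, the nvmf_tgt_ns_spdk namespace and the 10.0.0.x addresses are exactly the ones the test uses; the second target interface, the iptables ACCEPT rules and the teardown path are omitted here):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ping -c 1 10.0.0.2   # initiator-to-target reachability, as verified in the log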
00:10:51.170 15:09:20 -- nvmf/common.sh@469 -- # nvmfpid=65879 00:10:51.170 15:09:20 -- nvmf/common.sh@470 -- # waitforlisten 65879 00:10:51.170 15:09:20 -- common/autotest_common.sh@829 -- # '[' -z 65879 ']' 00:10:51.170 15:09:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.170 15:09:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:51.170 15:09:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:51.170 15:09:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.170 15:09:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:51.170 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:10:51.430 [2024-11-06 15:09:20.450616] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:51.430 [2024-11-06 15:09:20.450763] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.430 [2024-11-06 15:09:20.591914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.430 [2024-11-06 15:09:20.658597] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:51.430 [2024-11-06 15:09:20.659111] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.430 [2024-11-06 15:09:20.659143] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.430 [2024-11-06 15:09:20.659155] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
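With the namespace in place, the target is launched inside it with tracepoint mask 0xFFFF and core mask 0x2, and the harness waits for the RPC socket before driving it. The launch command below is verbatim from the trace; the wait loop is only a minimal stand-in for the harness's waitforlisten helper and assumes the default /var/tmp/spdk.sock and the rpc_get_methods RPC:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # minimal stand-in for waitforlisten (assumption, not the harness code)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done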
00:10:51.430 [2024-11-06 15:09:20.659199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.367 15:09:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:52.367 15:09:21 -- common/autotest_common.sh@862 -- # return 0 00:10:52.367 15:09:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:52.367 15:09:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:52.367 15:09:21 -- common/autotest_common.sh@10 -- # set +x 00:10:52.367 15:09:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.367 15:09:21 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:10:52.367 15:09:21 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:10:52.367 15:09:21 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:10:52.367 15:09:21 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:10:52.367 15:09:21 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:10:52.367 15:09:21 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:10:52.367 15:09:21 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:10:52.367 15:09:21 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:52.626 [2024-11-06 15:09:21.757820] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.626 [2024-11-06 15:09:21.773779] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:52.626 [2024-11-06 15:09:21.773971] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.626 malloc0 00:10:52.626 15:09:21 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:52.626 15:09:21 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:52.626 15:09:21 -- fips/fips.sh@147 -- # bdevperf_pid=65913 00:10:52.626 15:09:21 -- fips/fips.sh@148 -- # waitforlisten 65913 /var/tmp/bdevperf.sock 00:10:52.626 15:09:21 -- common/autotest_common.sh@829 -- # '[' -z 65913 ']' 00:10:52.626 15:09:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:52.626 15:09:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:52.626 15:09:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:52.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:52.626 15:09:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:52.626 15:09:21 -- common/autotest_common.sh@10 -- # set +x 00:10:52.626 [2024-11-06 15:09:21.887804] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
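fips.sh then writes the TLS pre-shared key to disk, locks its permissions down to 0600, hands it to the target configuration helper (that is where the "TLS support is considered experimental" notice and the 10.0.0.2:4420 listener above come from), and starts bdevperf in wait-for-RPC mode as the initiator. Key string, path, and the bdevperf command line are copied from the trace; only the backgrounding is paraphrased:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"
  # setup_nvmf_tgt_conf "$key_path" (fips.sh@141) drives scripts/rpc.py on the target side
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
      -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!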
00:10:52.626 [2024-11-06 15:09:21.887886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65913 ] 00:10:52.884 [2024-11-06 15:09:22.024459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.884 [2024-11-06 15:09:22.078594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.820 15:09:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:53.820 15:09:22 -- common/autotest_common.sh@862 -- # return 0 00:10:53.820 15:09:22 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:10:53.820 [2024-11-06 15:09:23.036470] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:54.079 TLSTESTn1 00:10:54.079 15:09:23 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:54.079 Running I/O for 10 seconds... 00:11:04.057 00:11:04.057 Latency(us) 00:11:04.057 [2024-11-06T15:09:33.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.057 [2024-11-06T15:09:33.332Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:04.057 Verification LBA range: start 0x0 length 0x2000 00:11:04.057 TLSTESTn1 : 10.01 6212.13 24.27 0.00 0.00 20572.52 4379.00 33602.09 00:11:04.057 [2024-11-06T15:09:33.332Z] =================================================================================================================== 00:11:04.057 [2024-11-06T15:09:33.332Z] Total : 6212.13 24.27 0.00 0.00 20572.52 4379.00 33602.09 00:11:04.057 0 00:11:04.057 15:09:33 -- fips/fips.sh@1 -- # cleanup 00:11:04.057 15:09:33 -- fips/fips.sh@15 -- # process_shm --id 0 00:11:04.057 15:09:33 -- common/autotest_common.sh@806 -- # type=--id 00:11:04.057 15:09:33 -- common/autotest_common.sh@807 -- # id=0 00:11:04.057 15:09:33 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:11:04.057 15:09:33 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:04.057 15:09:33 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:11:04.057 15:09:33 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:11:04.057 15:09:33 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:11:04.057 15:09:33 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:04.057 nvmf_trace.0 00:11:04.057 15:09:33 -- common/autotest_common.sh@821 -- # return 0 00:11:04.057 15:09:33 -- fips/fips.sh@16 -- # killprocess 65913 00:11:04.057 15:09:33 -- common/autotest_common.sh@936 -- # '[' -z 65913 ']' 00:11:04.057 15:09:33 -- common/autotest_common.sh@940 -- # kill -0 65913 00:11:04.058 15:09:33 -- common/autotest_common.sh@941 -- # uname 00:11:04.058 15:09:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:04.058 15:09:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65913 00:11:04.317 killing process with pid 65913 00:11:04.317 Received shutdown signal, test time was about 10.000000 seconds 00:11:04.317 00:11:04.317 Latency(us) 00:11:04.317 
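With bdevperf idle behind its own RPC socket, the test attaches a TLS-protected controller (the attach path prints the same experimental-TLS notice) and triggers the 10 second verify workload whose results, roughly 6212 IOPS on TLSTESTn1, appear just above. Both commands are verbatim from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests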
[2024-11-06T15:09:33.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.317 [2024-11-06T15:09:33.592Z] =================================================================================================================== 00:11:04.317 [2024-11-06T15:09:33.592Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:04.317 15:09:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:04.317 15:09:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:04.317 15:09:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65913' 00:11:04.317 15:09:33 -- common/autotest_common.sh@955 -- # kill 65913 00:11:04.317 15:09:33 -- common/autotest_common.sh@960 -- # wait 65913 00:11:04.317 15:09:33 -- fips/fips.sh@17 -- # nvmftestfini 00:11:04.317 15:09:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:04.317 15:09:33 -- nvmf/common.sh@116 -- # sync 00:11:04.317 15:09:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:04.317 15:09:33 -- nvmf/common.sh@119 -- # set +e 00:11:04.317 15:09:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:04.317 15:09:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:04.317 rmmod nvme_tcp 00:11:04.317 rmmod nvme_fabrics 00:11:04.576 rmmod nvme_keyring 00:11:04.576 15:09:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:04.576 15:09:33 -- nvmf/common.sh@123 -- # set -e 00:11:04.576 15:09:33 -- nvmf/common.sh@124 -- # return 0 00:11:04.576 15:09:33 -- nvmf/common.sh@477 -- # '[' -n 65879 ']' 00:11:04.576 15:09:33 -- nvmf/common.sh@478 -- # killprocess 65879 00:11:04.576 15:09:33 -- common/autotest_common.sh@936 -- # '[' -z 65879 ']' 00:11:04.576 15:09:33 -- common/autotest_common.sh@940 -- # kill -0 65879 00:11:04.576 15:09:33 -- common/autotest_common.sh@941 -- # uname 00:11:04.576 15:09:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:04.576 15:09:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65879 00:11:04.576 killing process with pid 65879 00:11:04.576 15:09:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:04.576 15:09:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:04.576 15:09:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65879' 00:11:04.576 15:09:33 -- common/autotest_common.sh@955 -- # kill 65879 00:11:04.576 15:09:33 -- common/autotest_common.sh@960 -- # wait 65879 00:11:04.576 15:09:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:04.576 15:09:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:04.576 15:09:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:04.576 15:09:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:04.576 15:09:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:04.576 15:09:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.576 15:09:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.576 15:09:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.835 15:09:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:04.835 15:09:33 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:04.835 ************************************ 00:11:04.835 END TEST nvmf_fips 00:11:04.835 ************************************ 00:11:04.835 00:11:04.835 real 0m14.268s 00:11:04.835 user 0m19.253s 00:11:04.835 sys 0m5.758s 00:11:04.835 15:09:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 
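The rest of the FIPS section is teardown: stop bdevperf and the target, archive the shared-memory trace file, unload the NVMe/TCP modules (the rmmod lines above are the visible effect), drop the namespace and addresses, and delete the key file. Roughly, reusing the pid variables from the earlier sketches, and with the namespace removal assumed to mirror _remove_spdk_ns, whose body is not shown in this trace:

  kill "$bdevperf_pid"; wait "$bdevperf_pid"   # killprocess 65913
  kill "$nvmfpid";      wait "$nvmfpid"        # killprocess 65879
  sync
  tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  ip netns delete nvmf_tgt_ns_spdk             # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt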
00:11:04.835 15:09:33 -- common/autotest_common.sh@10 -- # set +x 00:11:04.835 15:09:33 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:11:04.835 15:09:33 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:11:04.835 15:09:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:04.835 15:09:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:04.835 15:09:33 -- common/autotest_common.sh@10 -- # set +x 00:11:04.835 ************************************ 00:11:04.835 START TEST nvmf_fuzz 00:11:04.835 ************************************ 00:11:04.835 15:09:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:11:04.835 * Looking for test storage... 00:11:04.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:04.835 15:09:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:04.835 15:09:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:04.835 15:09:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:05.095 15:09:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:05.095 15:09:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:05.095 15:09:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:05.095 15:09:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:05.095 15:09:34 -- scripts/common.sh@335 -- # IFS=.-: 00:11:05.095 15:09:34 -- scripts/common.sh@335 -- # read -ra ver1 00:11:05.095 15:09:34 -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.095 15:09:34 -- scripts/common.sh@336 -- # read -ra ver2 00:11:05.095 15:09:34 -- scripts/common.sh@337 -- # local 'op=<' 00:11:05.095 15:09:34 -- scripts/common.sh@339 -- # ver1_l=2 00:11:05.095 15:09:34 -- scripts/common.sh@340 -- # ver2_l=1 00:11:05.095 15:09:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:05.095 15:09:34 -- scripts/common.sh@343 -- # case "$op" in 00:11:05.095 15:09:34 -- scripts/common.sh@344 -- # : 1 00:11:05.095 15:09:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:05.095 15:09:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:05.095 15:09:34 -- scripts/common.sh@364 -- # decimal 1 00:11:05.095 15:09:34 -- scripts/common.sh@352 -- # local d=1 00:11:05.095 15:09:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.095 15:09:34 -- scripts/common.sh@354 -- # echo 1 00:11:05.095 15:09:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:05.095 15:09:34 -- scripts/common.sh@365 -- # decimal 2 00:11:05.095 15:09:34 -- scripts/common.sh@352 -- # local d=2 00:11:05.095 15:09:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.095 15:09:34 -- scripts/common.sh@354 -- # echo 2 00:11:05.095 15:09:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:05.095 15:09:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:05.095 15:09:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:05.095 15:09:34 -- scripts/common.sh@367 -- # return 0 00:11:05.095 15:09:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.095 15:09:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:05.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.095 --rc genhtml_branch_coverage=1 00:11:05.095 --rc genhtml_function_coverage=1 00:11:05.095 --rc genhtml_legend=1 00:11:05.095 --rc geninfo_all_blocks=1 00:11:05.095 --rc geninfo_unexecuted_blocks=1 00:11:05.095 00:11:05.095 ' 00:11:05.095 15:09:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:05.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.095 --rc genhtml_branch_coverage=1 00:11:05.095 --rc genhtml_function_coverage=1 00:11:05.095 --rc genhtml_legend=1 00:11:05.095 --rc geninfo_all_blocks=1 00:11:05.095 --rc geninfo_unexecuted_blocks=1 00:11:05.095 00:11:05.095 ' 00:11:05.095 15:09:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:05.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.095 --rc genhtml_branch_coverage=1 00:11:05.095 --rc genhtml_function_coverage=1 00:11:05.095 --rc genhtml_legend=1 00:11:05.095 --rc geninfo_all_blocks=1 00:11:05.095 --rc geninfo_unexecuted_blocks=1 00:11:05.095 00:11:05.095 ' 00:11:05.095 15:09:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:05.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.095 --rc genhtml_branch_coverage=1 00:11:05.095 --rc genhtml_function_coverage=1 00:11:05.095 --rc genhtml_legend=1 00:11:05.095 --rc geninfo_all_blocks=1 00:11:05.095 --rc geninfo_unexecuted_blocks=1 00:11:05.095 00:11:05.095 ' 00:11:05.095 15:09:34 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:05.095 15:09:34 -- nvmf/common.sh@7 -- # uname -s 00:11:05.095 15:09:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.095 15:09:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.095 15:09:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.095 15:09:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.095 15:09:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.095 15:09:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.095 15:09:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.095 15:09:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.095 15:09:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.095 15:09:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.095 15:09:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 
00:11:05.095 15:09:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:11:05.095 15:09:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.095 15:09:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.095 15:09:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:05.095 15:09:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:05.095 15:09:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.095 15:09:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.095 15:09:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.095 15:09:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.095 15:09:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.095 15:09:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.095 15:09:34 -- paths/export.sh@5 -- # export PATH 00:11:05.095 15:09:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.095 15:09:34 -- nvmf/common.sh@46 -- # : 0 00:11:05.095 15:09:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:05.095 15:09:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:05.095 15:09:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:05.095 15:09:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.095 15:09:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.095 15:09:34 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:11:05.095 15:09:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:05.095 15:09:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:05.095 15:09:34 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:11:05.095 15:09:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:05.095 15:09:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.095 15:09:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:05.095 15:09:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:05.095 15:09:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:05.095 15:09:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.095 15:09:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:05.095 15:09:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.095 15:09:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:05.095 15:09:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:05.095 15:09:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:05.095 15:09:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:05.095 15:09:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:05.095 15:09:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:05.095 15:09:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.095 15:09:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.095 15:09:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:05.095 15:09:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:05.095 15:09:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:05.095 15:09:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:05.095 15:09:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:05.095 15:09:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.095 15:09:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:05.096 15:09:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:05.096 15:09:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:05.096 15:09:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:05.096 15:09:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:05.096 15:09:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:05.096 Cannot find device "nvmf_tgt_br" 00:11:05.096 15:09:34 -- nvmf/common.sh@154 -- # true 00:11:05.096 15:09:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:05.096 Cannot find device "nvmf_tgt_br2" 00:11:05.096 15:09:34 -- nvmf/common.sh@155 -- # true 00:11:05.096 15:09:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:05.096 15:09:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:05.096 Cannot find device "nvmf_tgt_br" 00:11:05.096 15:09:34 -- nvmf/common.sh@157 -- # true 00:11:05.096 15:09:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:05.096 Cannot find device "nvmf_tgt_br2" 00:11:05.096 15:09:34 -- nvmf/common.sh@158 -- # true 00:11:05.096 15:09:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:05.096 15:09:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:05.096 15:09:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:05.096 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:05.096 15:09:34 -- nvmf/common.sh@161 -- # true 00:11:05.096 15:09:34 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:05.096 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:05.096 15:09:34 -- nvmf/common.sh@162 -- # true 00:11:05.096 15:09:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:05.096 15:09:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:05.096 15:09:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:05.096 15:09:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:05.096 15:09:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:05.096 15:09:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:05.096 15:09:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:05.096 15:09:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:05.096 15:09:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:05.096 15:09:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:05.096 15:09:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:05.096 15:09:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:05.096 15:09:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:05.096 15:09:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:05.355 15:09:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:05.355 15:09:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:05.355 15:09:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:05.355 15:09:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:05.355 15:09:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:05.355 15:09:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:05.355 15:09:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:05.355 15:09:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:05.355 15:09:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:05.355 15:09:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:05.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:05.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:11:05.355 00:11:05.355 --- 10.0.0.2 ping statistics --- 00:11:05.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.355 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:11:05.355 15:09:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:05.355 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:05.355 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:11:05.355 00:11:05.355 --- 10.0.0.3 ping statistics --- 00:11:05.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.355 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:11:05.355 15:09:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:05.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:05.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:11:05.355 00:11:05.355 --- 10.0.0.1 ping statistics --- 00:11:05.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.355 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:11:05.355 15:09:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.355 15:09:34 -- nvmf/common.sh@421 -- # return 0 00:11:05.355 15:09:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:05.355 15:09:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.355 15:09:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:05.355 15:09:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:05.355 15:09:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.355 15:09:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:05.355 15:09:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:05.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.355 15:09:34 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=66254 00:11:05.355 15:09:34 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:05.355 15:09:34 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:05.355 15:09:34 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 66254 00:11:05.355 15:09:34 -- common/autotest_common.sh@829 -- # '[' -z 66254 ']' 00:11:05.355 15:09:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.355 15:09:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:05.355 15:09:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:05.355 15:09:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:05.355 15:09:34 -- common/autotest_common.sh@10 -- # set +x 00:11:06.733 15:09:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.733 15:09:35 -- common/autotest_common.sh@862 -- # return 0 00:11:06.733 15:09:35 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.733 15:09:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.733 15:09:35 -- common/autotest_common.sh@10 -- # set +x 00:11:06.733 15:09:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.733 15:09:35 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:11:06.733 15:09:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.733 15:09:35 -- common/autotest_common.sh@10 -- # set +x 00:11:06.733 Malloc0 00:11:06.733 15:09:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.733 15:09:35 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:06.733 15:09:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.733 15:09:35 -- common/autotest_common.sh@10 -- # set +x 00:11:06.733 15:09:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.733 15:09:35 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:06.733 15:09:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.733 15:09:35 -- common/autotest_common.sh@10 -- # set +x 00:11:06.733 15:09:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.733 15:09:35 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.733 15:09:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.733 15:09:35 -- common/autotest_common.sh@10 -- # set +x 00:11:06.733 15:09:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.733 15:09:35 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:11:06.733 15:09:35 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:11:06.733 Shutting down the fuzz application 00:11:06.733 15:09:35 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:11:07.301 Shutting down the fuzz application 00:11:07.301 15:09:36 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.301 15:09:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.301 15:09:36 -- common/autotest_common.sh@10 -- # set +x 00:11:07.301 15:09:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.301 15:09:36 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:07.301 15:09:36 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:11:07.301 15:09:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:07.301 15:09:36 -- nvmf/common.sh@116 -- # sync 00:11:07.301 15:09:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:07.301 15:09:36 -- nvmf/common.sh@119 -- # set +e 00:11:07.301 15:09:36 -- 
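Seen end to end, the fabrics fuzz scenario above is short: create the TCP transport, back a subsystem with a 64 MiB malloc bdev, listen on 10.0.0.2:4420, fuzz the target twice (a 30 second randomized pass with a fixed seed, then a replay of the canned cases in example.json), and delete the subsystem. Restated in plain shell, assuming rpc_cmd is the harness's wrapper around scripts/rpc.py on the default socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py          # assumed expansion of rpc_cmd
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create -b Malloc0 64 512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  fuzz=/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
  $fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
  $fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" \
      -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1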
nvmf/common.sh@120 -- # for i in {1..20} 00:11:07.301 15:09:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:07.301 rmmod nvme_tcp 00:11:07.301 rmmod nvme_fabrics 00:11:07.301 rmmod nvme_keyring 00:11:07.301 15:09:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:07.301 15:09:36 -- nvmf/common.sh@123 -- # set -e 00:11:07.301 15:09:36 -- nvmf/common.sh@124 -- # return 0 00:11:07.301 15:09:36 -- nvmf/common.sh@477 -- # '[' -n 66254 ']' 00:11:07.301 15:09:36 -- nvmf/common.sh@478 -- # killprocess 66254 00:11:07.301 15:09:36 -- common/autotest_common.sh@936 -- # '[' -z 66254 ']' 00:11:07.301 15:09:36 -- common/autotest_common.sh@940 -- # kill -0 66254 00:11:07.301 15:09:36 -- common/autotest_common.sh@941 -- # uname 00:11:07.301 15:09:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:07.301 15:09:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66254 00:11:07.301 killing process with pid 66254 00:11:07.301 15:09:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:07.301 15:09:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:07.301 15:09:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66254' 00:11:07.301 15:09:36 -- common/autotest_common.sh@955 -- # kill 66254 00:11:07.301 15:09:36 -- common/autotest_common.sh@960 -- # wait 66254 00:11:07.560 15:09:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:07.560 15:09:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:07.560 15:09:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:07.560 15:09:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:07.560 15:09:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:07.560 15:09:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.560 15:09:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.560 15:09:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.560 15:09:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:07.560 15:09:36 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:11:07.560 ************************************ 00:11:07.560 END TEST nvmf_fuzz 00:11:07.560 ************************************ 00:11:07.560 00:11:07.560 real 0m2.778s 00:11:07.560 user 0m3.032s 00:11:07.560 sys 0m0.592s 00:11:07.560 15:09:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:07.560 15:09:36 -- common/autotest_common.sh@10 -- # set +x 00:11:07.560 15:09:36 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:11:07.560 15:09:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:07.560 15:09:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:07.560 15:09:36 -- common/autotest_common.sh@10 -- # set +x 00:11:07.560 ************************************ 00:11:07.560 START TEST nvmf_multiconnection 00:11:07.560 ************************************ 00:11:07.560 15:09:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:11:07.820 * Looking for test storage... 
00:11:07.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:07.820 15:09:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:07.820 15:09:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:07.820 15:09:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:07.820 15:09:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:07.820 15:09:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:07.820 15:09:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:07.820 15:09:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:07.820 15:09:36 -- scripts/common.sh@335 -- # IFS=.-: 00:11:07.820 15:09:36 -- scripts/common.sh@335 -- # read -ra ver1 00:11:07.820 15:09:36 -- scripts/common.sh@336 -- # IFS=.-: 00:11:07.820 15:09:36 -- scripts/common.sh@336 -- # read -ra ver2 00:11:07.820 15:09:36 -- scripts/common.sh@337 -- # local 'op=<' 00:11:07.820 15:09:36 -- scripts/common.sh@339 -- # ver1_l=2 00:11:07.820 15:09:36 -- scripts/common.sh@340 -- # ver2_l=1 00:11:07.820 15:09:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:07.820 15:09:36 -- scripts/common.sh@343 -- # case "$op" in 00:11:07.820 15:09:36 -- scripts/common.sh@344 -- # : 1 00:11:07.820 15:09:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:07.820 15:09:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:07.820 15:09:36 -- scripts/common.sh@364 -- # decimal 1 00:11:07.820 15:09:36 -- scripts/common.sh@352 -- # local d=1 00:11:07.820 15:09:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.820 15:09:36 -- scripts/common.sh@354 -- # echo 1 00:11:07.820 15:09:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:07.820 15:09:36 -- scripts/common.sh@365 -- # decimal 2 00:11:07.820 15:09:36 -- scripts/common.sh@352 -- # local d=2 00:11:07.820 15:09:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:07.820 15:09:36 -- scripts/common.sh@354 -- # echo 2 00:11:07.820 15:09:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:07.820 15:09:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:07.820 15:09:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:07.820 15:09:36 -- scripts/common.sh@367 -- # return 0 00:11:07.820 15:09:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:07.820 15:09:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:07.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.820 --rc genhtml_branch_coverage=1 00:11:07.820 --rc genhtml_function_coverage=1 00:11:07.820 --rc genhtml_legend=1 00:11:07.820 --rc geninfo_all_blocks=1 00:11:07.820 --rc geninfo_unexecuted_blocks=1 00:11:07.820 00:11:07.820 ' 00:11:07.820 15:09:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:07.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.820 --rc genhtml_branch_coverage=1 00:11:07.820 --rc genhtml_function_coverage=1 00:11:07.820 --rc genhtml_legend=1 00:11:07.820 --rc geninfo_all_blocks=1 00:11:07.820 --rc geninfo_unexecuted_blocks=1 00:11:07.820 00:11:07.821 ' 00:11:07.821 15:09:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:07.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.821 --rc genhtml_branch_coverage=1 00:11:07.821 --rc genhtml_function_coverage=1 00:11:07.821 --rc genhtml_legend=1 00:11:07.821 --rc geninfo_all_blocks=1 00:11:07.821 --rc geninfo_unexecuted_blocks=1 00:11:07.821 00:11:07.821 ' 00:11:07.821 
15:09:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:07.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.821 --rc genhtml_branch_coverage=1 00:11:07.821 --rc genhtml_function_coverage=1 00:11:07.821 --rc genhtml_legend=1 00:11:07.821 --rc geninfo_all_blocks=1 00:11:07.821 --rc geninfo_unexecuted_blocks=1 00:11:07.821 00:11:07.821 ' 00:11:07.821 15:09:36 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:07.821 15:09:36 -- nvmf/common.sh@7 -- # uname -s 00:11:07.821 15:09:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.821 15:09:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.821 15:09:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.821 15:09:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.821 15:09:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.821 15:09:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.821 15:09:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.821 15:09:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.821 15:09:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.821 15:09:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.821 15:09:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:11:07.821 15:09:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:11:07.821 15:09:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.821 15:09:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.821 15:09:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:07.821 15:09:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:07.821 15:09:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.821 15:09:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.821 15:09:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.821 15:09:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.821 15:09:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.821 15:09:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.821 15:09:36 -- paths/export.sh@5 -- # export PATH 00:11:07.821 15:09:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.821 15:09:36 -- nvmf/common.sh@46 -- # : 0 00:11:07.821 15:09:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:07.821 15:09:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:07.821 15:09:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:07.821 15:09:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.821 15:09:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.821 15:09:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:07.821 15:09:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:07.821 15:09:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:07.821 15:09:36 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:07.821 15:09:36 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:07.821 15:09:36 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:11:07.821 15:09:36 -- target/multiconnection.sh@16 -- # nvmftestinit 00:11:07.821 15:09:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:07.821 15:09:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.821 15:09:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:07.821 15:09:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:07.821 15:09:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:07.821 15:09:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.821 15:09:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.821 15:09:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.821 15:09:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:07.821 15:09:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:07.821 15:09:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:07.821 15:09:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:07.821 15:09:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:07.821 15:09:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:07.821 15:09:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.821 15:09:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.821 15:09:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:07.821 15:09:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:07.821 15:09:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:07.821 15:09:37 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:07.821 15:09:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:07.821 15:09:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.821 15:09:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:07.821 15:09:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:07.821 15:09:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:07.821 15:09:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:07.821 15:09:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:07.821 15:09:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:07.821 Cannot find device "nvmf_tgt_br" 00:11:07.821 15:09:37 -- nvmf/common.sh@154 -- # true 00:11:07.821 15:09:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:07.821 Cannot find device "nvmf_tgt_br2" 00:11:07.821 15:09:37 -- nvmf/common.sh@155 -- # true 00:11:07.821 15:09:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:07.821 15:09:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:07.821 Cannot find device "nvmf_tgt_br" 00:11:07.821 15:09:37 -- nvmf/common.sh@157 -- # true 00:11:07.821 15:09:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:07.821 Cannot find device "nvmf_tgt_br2" 00:11:07.821 15:09:37 -- nvmf/common.sh@158 -- # true 00:11:07.821 15:09:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:08.089 15:09:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:08.089 15:09:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:08.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:08.089 15:09:37 -- nvmf/common.sh@161 -- # true 00:11:08.089 15:09:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:08.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:08.089 15:09:37 -- nvmf/common.sh@162 -- # true 00:11:08.089 15:09:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:08.089 15:09:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:08.089 15:09:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:08.089 15:09:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:08.089 15:09:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:08.089 15:09:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:08.089 15:09:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:08.089 15:09:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:08.089 15:09:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:08.089 15:09:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:08.089 15:09:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:08.089 15:09:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:08.090 15:09:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:08.090 15:09:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:08.090 15:09:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:11:08.090 15:09:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:08.090 15:09:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:08.090 15:09:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:08.090 15:09:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:08.090 15:09:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:08.090 15:09:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:08.090 15:09:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:08.090 15:09:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:08.090 15:09:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:08.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:11:08.090 00:11:08.090 --- 10.0.0.2 ping statistics --- 00:11:08.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.090 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:11:08.090 15:09:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:08.090 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:08.090 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:11:08.090 00:11:08.090 --- 10.0.0.3 ping statistics --- 00:11:08.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.090 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:11:08.090 15:09:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:08.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:08.090 00:11:08.090 --- 10.0.0.1 ping statistics --- 00:11:08.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.090 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:08.090 15:09:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.090 15:09:37 -- nvmf/common.sh@421 -- # return 0 00:11:08.090 15:09:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:08.090 15:09:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.090 15:09:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:08.090 15:09:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:08.090 15:09:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.090 15:09:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:08.090 15:09:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:08.423 15:09:37 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:11:08.423 15:09:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:08.423 15:09:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:08.423 15:09:37 -- common/autotest_common.sh@10 -- # set +x 00:11:08.423 15:09:37 -- nvmf/common.sh@469 -- # nvmfpid=66444 00:11:08.423 15:09:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:08.423 15:09:37 -- nvmf/common.sh@470 -- # waitforlisten 66444 00:11:08.423 15:09:37 -- common/autotest_common.sh@829 -- # '[' -z 66444 ']' 00:11:08.423 15:09:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:08.423 15:09:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:08.423 15:09:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.423 15:09:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:08.423 15:09:37 -- common/autotest_common.sh@10 -- # set +x 00:11:08.423 [2024-11-06 15:09:37.439280] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:08.423 [2024-11-06 15:09:37.439554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.423 [2024-11-06 15:09:37.584849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.423 [2024-11-06 15:09:37.637036] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:08.423 [2024-11-06 15:09:37.637414] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.423 [2024-11-06 15:09:37.637467] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.423 [2024-11-06 15:09:37.637676] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.423 [2024-11-06 15:09:37.637865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.424 [2024-11-06 15:09:37.638099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.424 [2024-11-06 15:09:37.637966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.424 [2024-11-06 15:09:37.638102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.359 15:09:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.359 15:09:38 -- common/autotest_common.sh@862 -- # return 0 00:11:09.359 15:09:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:09.359 15:09:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:09.359 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.359 15:09:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.359 15:09:38 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.359 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.359 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.359 [2024-11-06 15:09:38.483450] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.359 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.359 15:09:38 -- target/multiconnection.sh@21 -- # seq 1 11 00:11:09.359 15:09:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:09.359 15:09:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:09.359 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.359 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.359 Malloc1 00:11:09.359 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.359 15:09:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:11:09.359 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.359 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.359 15:09:38 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.359 15:09:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:09.359 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.359 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.359 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.359 15:09:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.359 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.359 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.359 [2024-11-06 15:09:38.565996] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.359 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.359 15:09:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:09.359 15:09:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:11:09.359 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.359 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.359 Malloc2 00:11:09.359 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.359 15:09:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:09.359 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.359 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.359 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.359 15:09:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:11:09.359 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.359 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.359 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.359 15:09:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:09.359 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.359 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.359 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.359 15:09:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:09.359 15:09:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:11:09.359 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.359 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 Malloc3 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:09.619 15:09:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 Malloc4 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:09.619 15:09:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 Malloc5 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:09.619 15:09:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- 
common/autotest_common.sh@10 -- # set +x 00:11:09.619 Malloc6 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:09.619 15:09:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 Malloc7 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:09.619 15:09:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 Malloc8 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 
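The rpc_cmd calls traced in this stretch repeat one provisioning pattern eleven times (Malloc1/cnode1 through Malloc11/cnode11): create a 64 MiB malloc bdev with 512-byte blocks, create a subsystem with serial SPDKi, attach the bdev as a namespace, and expose it on the 10.0.0.2:4420 TCP listener. Collected in one place, and assuming rpc_cmd forwards to SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket (the exact wrapper lives in the test harness, not shown here), the pattern is roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path assumed from the repo layout seen in this log
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 11); do
      $rpc bdev_malloc_create 64 512 -b "Malloc$i"
      $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done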
00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:09.619 15:09:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.619 Malloc9 00:11:09.619 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.619 15:09:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:11:09.619 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.619 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.620 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.878 15:09:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:11:09.878 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.878 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.878 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.878 15:09:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:11:09.878 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.878 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.878 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.878 15:09:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:09.878 15:09:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:11:09.878 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.878 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.878 Malloc10 00:11:09.878 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.878 15:09:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:11:09.878 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.878 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.878 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.878 15:09:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:11:09.878 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.878 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.878 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.878 15:09:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:11:09.878 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.878 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.878 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.878 15:09:38 -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:09.878 15:09:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:11:09.878 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.879 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.879 Malloc11 00:11:09.879 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.879 15:09:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:11:09.879 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.879 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.879 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.879 15:09:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:11:09.879 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.879 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.879 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.879 15:09:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:11:09.879 15:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.879 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.879 15:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.879 15:09:38 -- target/multiconnection.sh@28 -- # seq 1 11 00:11:09.879 15:09:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:09.879 15:09:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 --hostid=819f6113-9743-44c3-be27-f14abf178c18 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:09.879 15:09:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:11:09.879 15:09:39 -- common/autotest_common.sh@1187 -- # local i=0 00:11:09.879 15:09:39 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:09.879 15:09:39 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:09.879 15:09:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:12.410 15:09:41 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:12.410 15:09:41 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:12.410 15:09:41 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:11:12.410 15:09:41 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:12.410 15:09:41 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:12.410 15:09:41 -- common/autotest_common.sh@1197 -- # return 0 00:11:12.410 15:09:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:12.410 15:09:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 --hostid=819f6113-9743-44c3-be27-f14abf178c18 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:11:12.410 15:09:41 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:11:12.410 15:09:41 -- common/autotest_common.sh@1187 -- # local i=0 00:11:12.410 15:09:41 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:12.410 15:09:41 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:12.410 15:09:41 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:14.312 15:09:43 -- common/autotest_common.sh@1195 -- # (( 
i++ <= 15 )) 00:11:14.312 15:09:43 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:14.312 15:09:43 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:11:14.312 15:09:43 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:14.312 15:09:43 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:14.312 15:09:43 -- common/autotest_common.sh@1197 -- # return 0 00:11:14.312 15:09:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:14.312 15:09:43 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 --hostid=819f6113-9743-44c3-be27-f14abf178c18 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:11:14.312 15:09:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:11:14.312 15:09:43 -- common/autotest_common.sh@1187 -- # local i=0 00:11:14.312 15:09:43 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:14.312 15:09:43 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:14.312 15:09:43 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:16.215 15:09:45 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:16.215 15:09:45 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:16.215 15:09:45 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:11:16.215 15:09:45 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:16.215 15:09:45 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:16.215 15:09:45 -- common/autotest_common.sh@1197 -- # return 0 00:11:16.215 15:09:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:16.215 15:09:45 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 --hostid=819f6113-9743-44c3-be27-f14abf178c18 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:11:16.473 15:09:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:11:16.473 15:09:45 -- common/autotest_common.sh@1187 -- # local i=0 00:11:16.473 15:09:45 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.473 15:09:45 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:16.473 15:09:45 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:18.376 15:09:47 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:18.376 15:09:47 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:18.376 15:09:47 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:11:18.376 15:09:47 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:18.376 15:09:47 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:18.376 15:09:47 -- common/autotest_common.sh@1197 -- # return 0 00:11:18.376 15:09:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:18.376 15:09:47 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 --hostid=819f6113-9743-44c3-be27-f14abf178c18 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:11:18.634 15:09:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:11:18.634 15:09:47 -- common/autotest_common.sh@1187 -- # local i=0 00:11:18.634 15:09:47 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.634 15:09:47 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 
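On the initiator side the loop is the same for every subsystem: nvme connect over TCP to 10.0.0.2:4420 with the host NQN/ID used throughout this run, then poll lsblk until a block device carrying the expected SPDKi serial appears (waitforserial in the trace allows roughly 15 two-second retries, judging by the (( i++ <= 15 )) check). A simplified sketch of that connect-and-wait step, with the NQN/ID values copied from the trace:

  for i in $(seq 1 11); do
      nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18" \
          --hostid="819f6113-9743-44c3-be27-f14abf178c18" \
          -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
      tries=0
      # wait until lsblk reports a device whose serial matches this subsystem
      until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
          tries=$((tries + 1)); [ "$tries" -le 15 ] || exit 1
          sleep 2
      done
  done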
00:11:18.634 15:09:47 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:20.536 15:09:49 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:20.536 15:09:49 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:20.536 15:09:49 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:11:20.536 15:09:49 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:20.536 15:09:49 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:20.536 15:09:49 -- common/autotest_common.sh@1197 -- # return 0 00:11:20.536 15:09:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:20.536 15:09:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 --hostid=819f6113-9743-44c3-be27-f14abf178c18 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:11:20.794 15:09:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:11:20.794 15:09:49 -- common/autotest_common.sh@1187 -- # local i=0 00:11:20.794 15:09:49 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:20.794 15:09:49 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:20.794 15:09:49 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:22.698 15:09:51 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:22.698 15:09:51 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:22.698 15:09:51 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:11:22.698 15:09:51 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:22.698 15:09:51 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:22.698 15:09:51 -- common/autotest_common.sh@1197 -- # return 0 00:11:22.698 15:09:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:22.698 15:09:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 --hostid=819f6113-9743-44c3-be27-f14abf178c18 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:11:22.957 15:09:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:11:22.957 15:09:52 -- common/autotest_common.sh@1187 -- # local i=0 00:11:22.957 15:09:52 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:22.957 15:09:52 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:22.957 15:09:52 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:24.859 15:09:54 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:24.859 15:09:54 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:24.859 15:09:54 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:11:25.117 15:09:54 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:25.117 15:09:54 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.117 15:09:54 -- common/autotest_common.sh@1197 -- # return 0 00:11:25.117 15:09:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:25.117 15:09:54 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 --hostid=819f6113-9743-44c3-be27-f14abf178c18 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:11:25.117 15:09:54 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:11:25.117 15:09:54 -- common/autotest_common.sh@1187 -- # local i=0 00:11:25.117 15:09:54 -- 
common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:25.117 15:09:54 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:25.117 15:09:54 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:27.068 15:09:56 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:27.068 15:09:56 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:27.068 15:09:56 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:11:27.068 15:09:56 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:27.068 15:09:56 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:27.068 15:09:56 -- common/autotest_common.sh@1197 -- # return 0 00:11:27.068 15:09:56 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:27.068 15:09:56 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 --hostid=819f6113-9743-44c3-be27-f14abf178c18 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:11:27.327 15:09:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:11:27.327 15:09:56 -- common/autotest_common.sh@1187 -- # local i=0 00:11:27.327 15:09:56 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.327 15:09:56 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:27.327 15:09:56 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:29.229 15:09:58 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:29.229 15:09:58 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:29.229 15:09:58 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:11:29.229 15:09:58 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:29.229 15:09:58 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.229 15:09:58 -- common/autotest_common.sh@1197 -- # return 0 00:11:29.229 15:09:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:29.229 15:09:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 --hostid=819f6113-9743-44c3-be27-f14abf178c18 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:11:29.488 15:09:58 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:11:29.488 15:09:58 -- common/autotest_common.sh@1187 -- # local i=0 00:11:29.488 15:09:58 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:29.488 15:09:58 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:29.488 15:09:58 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:31.391 15:10:00 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:31.391 15:10:00 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:31.391 15:10:00 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:11:31.649 15:10:00 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:31.649 15:10:00 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:31.649 15:10:00 -- common/autotest_common.sh@1197 -- # return 0 00:11:31.649 15:10:00 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:31.650 15:10:00 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 --hostid=819f6113-9743-44c3-be27-f14abf178c18 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:11:31.650 15:10:00 -- 
target/multiconnection.sh@30 -- # waitforserial SPDK11 00:11:31.650 15:10:00 -- common/autotest_common.sh@1187 -- # local i=0 00:11:31.650 15:10:00 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:31.650 15:10:00 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:31.650 15:10:00 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:34.182 15:10:02 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:34.182 15:10:02 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:34.182 15:10:02 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:11:34.182 15:10:02 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:34.182 15:10:02 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:34.182 15:10:02 -- common/autotest_common.sh@1197 -- # return 0 00:11:34.182 15:10:02 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:11:34.182 [global] 00:11:34.182 thread=1 00:11:34.182 invalidate=1 00:11:34.182 rw=read 00:11:34.182 time_based=1 00:11:34.182 runtime=10 00:11:34.182 ioengine=libaio 00:11:34.182 direct=1 00:11:34.182 bs=262144 00:11:34.182 iodepth=64 00:11:34.182 norandommap=1 00:11:34.182 numjobs=1 00:11:34.182 00:11:34.182 [job0] 00:11:34.182 filename=/dev/nvme0n1 00:11:34.182 [job1] 00:11:34.182 filename=/dev/nvme10n1 00:11:34.182 [job2] 00:11:34.182 filename=/dev/nvme1n1 00:11:34.182 [job3] 00:11:34.182 filename=/dev/nvme2n1 00:11:34.182 [job4] 00:11:34.182 filename=/dev/nvme3n1 00:11:34.182 [job5] 00:11:34.182 filename=/dev/nvme4n1 00:11:34.182 [job6] 00:11:34.182 filename=/dev/nvme5n1 00:11:34.182 [job7] 00:11:34.182 filename=/dev/nvme6n1 00:11:34.182 [job8] 00:11:34.182 filename=/dev/nvme7n1 00:11:34.182 [job9] 00:11:34.182 filename=/dev/nvme8n1 00:11:34.182 [job10] 00:11:34.182 filename=/dev/nvme9n1 00:11:34.182 Could not set queue depth (nvme0n1) 00:11:34.182 Could not set queue depth (nvme10n1) 00:11:34.182 Could not set queue depth (nvme1n1) 00:11:34.182 Could not set queue depth (nvme2n1) 00:11:34.182 Could not set queue depth (nvme3n1) 00:11:34.182 Could not set queue depth (nvme4n1) 00:11:34.182 Could not set queue depth (nvme5n1) 00:11:34.182 Could not set queue depth (nvme6n1) 00:11:34.182 Could not set queue depth (nvme7n1) 00:11:34.182 Could not set queue depth (nvme8n1) 00:11:34.182 Could not set queue depth (nvme9n1) 00:11:34.182 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:34.182 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:34.182 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:34.182 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:34.182 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:34.182 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:34.182 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:34.182 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:34.182 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:11:34.182 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:34.182 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:34.182 fio-3.35 00:11:34.182 Starting 11 threads 00:11:46.391 00:11:46.391 job0: (groupid=0, jobs=1): err= 0: pid=66915: Wed Nov 6 15:10:13 2024 00:11:46.391 read: IOPS=1010, BW=253MiB/s (265MB/s)(2529MiB/10016msec) 00:11:46.391 slat (usec): min=20, max=25998, avg=983.76, stdev=2115.80 00:11:46.391 clat (usec): min=12804, max=93617, avg=62297.20, stdev=6137.29 00:11:46.391 lat (usec): min=14095, max=93660, avg=63280.96, stdev=6144.82 00:11:46.391 clat percentiles (usec): 00:11:46.391 | 1.00th=[46924], 5.00th=[53216], 10.00th=[55313], 20.00th=[57934], 00:11:46.391 | 30.00th=[59507], 40.00th=[61080], 50.00th=[62129], 60.00th=[63701], 00:11:46.391 | 70.00th=[65274], 80.00th=[66847], 90.00th=[69731], 95.00th=[71828], 00:11:46.391 | 99.00th=[77071], 99.50th=[79168], 99.90th=[86508], 99.95th=[90702], 00:11:46.391 | 99.99th=[93848] 00:11:46.391 bw ( KiB/s): min=236544, max=266240, per=15.97%, avg=257408.20, stdev=6161.60, samples=20 00:11:46.391 iops : min= 924, max= 1040, avg=1005.45, stdev=24.03, samples=20 00:11:46.391 lat (msec) : 20=0.11%, 50=1.72%, 100=98.17% 00:11:46.391 cpu : usr=0.48%, sys=3.86%, ctx=2236, majf=0, minf=4097 00:11:46.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:46.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:46.391 issued rwts: total=10117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.391 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:46.391 job1: (groupid=0, jobs=1): err= 0: pid=66916: Wed Nov 6 15:10:13 2024 00:11:46.391 read: IOPS=325, BW=81.3MiB/s (85.3MB/s)(823MiB/10111msec) 00:11:46.391 slat (usec): min=21, max=121564, avg=3005.43, stdev=12314.93 00:11:46.391 clat (msec): min=10, max=303, avg=193.42, stdev=31.13 00:11:46.391 lat (msec): min=12, max=368, avg=196.43, stdev=33.71 00:11:46.391 clat percentiles (msec): 00:11:46.391 | 1.00th=[ 35], 5.00th=[ 159], 10.00th=[ 176], 20.00th=[ 194], 00:11:46.391 | 30.00th=[ 197], 40.00th=[ 199], 50.00th=[ 199], 60.00th=[ 201], 00:11:46.391 | 70.00th=[ 201], 80.00th=[ 205], 90.00th=[ 207], 95.00th=[ 211], 00:11:46.391 | 99.00th=[ 288], 99.50th=[ 296], 99.90th=[ 305], 99.95th=[ 305], 00:11:46.391 | 99.99th=[ 305] 00:11:46.391 bw ( KiB/s): min=66560, max=103936, per=5.13%, avg=82611.20, stdev=10279.37, samples=20 00:11:46.391 iops : min= 260, max= 406, avg=322.70, stdev=40.15, samples=20 00:11:46.391 lat (msec) : 20=0.27%, 50=1.37%, 100=1.91%, 250=94.71%, 500=1.73% 00:11:46.391 cpu : usr=0.23%, sys=1.35%, ctx=792, majf=0, minf=4097 00:11:46.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:11:46.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:46.391 issued rwts: total=3290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.391 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:46.391 job2: (groupid=0, jobs=1): err= 0: pid=66917: Wed Nov 6 15:10:13 2024 00:11:46.391 read: IOPS=349, BW=87.4MiB/s (91.7MB/s)(885MiB/10126msec) 00:11:46.391 slat (usec): min=21, max=103294, avg=2809.62, stdev=10888.75 00:11:46.391 clat (msec): min=18, max=321, 
avg=179.98, stdev=47.61 00:11:46.391 lat (msec): min=18, max=321, avg=182.79, stdev=49.26 00:11:46.391 clat percentiles (msec): 00:11:46.391 | 1.00th=[ 47], 5.00th=[ 73], 10.00th=[ 91], 20.00th=[ 188], 00:11:46.391 | 30.00th=[ 194], 40.00th=[ 197], 50.00th=[ 199], 60.00th=[ 199], 00:11:46.391 | 70.00th=[ 201], 80.00th=[ 203], 90.00th=[ 207], 95.00th=[ 211], 00:11:46.391 | 99.00th=[ 292], 99.50th=[ 296], 99.90th=[ 317], 99.95th=[ 321], 00:11:46.391 | 99.99th=[ 321] 00:11:46.391 bw ( KiB/s): min=67072, max=171350, per=5.52%, avg=89028.60, stdev=27495.98, samples=20 00:11:46.391 iops : min= 262, max= 669, avg=347.70, stdev=107.38, samples=20 00:11:46.391 lat (msec) : 20=0.08%, 50=1.10%, 100=12.43%, 250=84.55%, 500=1.84% 00:11:46.391 cpu : usr=0.21%, sys=1.33%, ctx=837, majf=0, minf=4097 00:11:46.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:11:46.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:46.391 issued rwts: total=3541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.391 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:46.391 job3: (groupid=0, jobs=1): err= 0: pid=66918: Wed Nov 6 15:10:13 2024 00:11:46.391 read: IOPS=319, BW=80.0MiB/s (83.8MB/s)(810MiB/10133msec) 00:11:46.391 slat (usec): min=20, max=114079, avg=3089.05, stdev=11078.42 00:11:46.391 clat (msec): min=54, max=327, avg=196.78, stdev=22.95 00:11:46.391 lat (msec): min=55, max=327, avg=199.86, stdev=25.23 00:11:46.391 clat percentiles (msec): 00:11:46.391 | 1.00th=[ 93], 5.00th=[ 167], 10.00th=[ 182], 20.00th=[ 194], 00:11:46.391 | 30.00th=[ 197], 40.00th=[ 197], 50.00th=[ 199], 60.00th=[ 201], 00:11:46.391 | 70.00th=[ 201], 80.00th=[ 203], 90.00th=[ 207], 95.00th=[ 220], 00:11:46.391 | 99.00th=[ 266], 99.50th=[ 284], 99.90th=[ 313], 99.95th=[ 313], 00:11:46.391 | 99.99th=[ 330] 00:11:46.391 bw ( KiB/s): min=69632, max=96768, per=5.05%, avg=81348.45, stdev=8414.97, samples=20 00:11:46.391 iops : min= 272, max= 378, avg=317.65, stdev=32.84, samples=20 00:11:46.391 lat (msec) : 100=1.94%, 250=95.43%, 500=2.62% 00:11:46.391 cpu : usr=0.13%, sys=1.43%, ctx=831, majf=0, minf=4097 00:11:46.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:11:46.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:46.391 issued rwts: total=3241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.391 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:46.391 job4: (groupid=0, jobs=1): err= 0: pid=66919: Wed Nov 6 15:10:13 2024 00:11:46.391 read: IOPS=1010, BW=253MiB/s (265MB/s)(2529MiB/10013msec) 00:11:46.391 slat (usec): min=20, max=22661, avg=983.55, stdev=2105.65 00:11:46.391 clat (usec): min=10827, max=88381, avg=62310.63, stdev=5830.01 00:11:46.391 lat (usec): min=15108, max=88451, avg=63294.18, stdev=5829.87 00:11:46.391 clat percentiles (usec): 00:11:46.391 | 1.00th=[49021], 5.00th=[53740], 10.00th=[55837], 20.00th=[57934], 00:11:46.391 | 30.00th=[59507], 40.00th=[61080], 50.00th=[62129], 60.00th=[63701], 00:11:46.391 | 70.00th=[65274], 80.00th=[66847], 90.00th=[69731], 95.00th=[71828], 00:11:46.391 | 99.00th=[76022], 99.50th=[78119], 99.90th=[81265], 99.95th=[83362], 00:11:46.391 | 99.99th=[88605] 00:11:46.391 bw ( KiB/s): min=229376, max=267776, per=15.97%, avg=257304.90, stdev=8099.42, samples=20 00:11:46.391 iops : min= 896, 
max= 1046, avg=1005.00, stdev=31.63, samples=20 00:11:46.391 lat (msec) : 20=0.10%, 50=1.31%, 100=98.60% 00:11:46.391 cpu : usr=0.51%, sys=4.13%, ctx=2216, majf=0, minf=4097 00:11:46.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:46.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:46.391 issued rwts: total=10114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.391 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:46.391 job5: (groupid=0, jobs=1): err= 0: pid=66920: Wed Nov 6 15:10:13 2024 00:11:46.391 read: IOPS=1030, BW=258MiB/s (270MB/s)(2586MiB/10039msec) 00:11:46.391 slat (usec): min=18, max=51408, avg=959.14, stdev=2202.41 00:11:46.391 clat (msec): min=5, max=123, avg=61.10, stdev=10.98 00:11:46.391 lat (msec): min=6, max=123, avg=62.05, stdev=11.07 00:11:46.392 clat percentiles (msec): 00:11:46.392 | 1.00th=[ 23], 5.00th=[ 52], 10.00th=[ 54], 20.00th=[ 57], 00:11:46.392 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 62], 00:11:46.392 | 70.00th=[ 64], 80.00th=[ 65], 90.00th=[ 68], 95.00th=[ 81], 00:11:46.392 | 99.00th=[ 105], 99.50th=[ 110], 99.90th=[ 118], 99.95th=[ 118], 00:11:46.392 | 99.99th=[ 118] 00:11:46.392 bw ( KiB/s): min=182272, max=329216, per=16.33%, avg=263123.20, stdev=29551.13, samples=20 00:11:46.392 iops : min= 712, max= 1286, avg=1027.80, stdev=115.50, samples=20 00:11:46.392 lat (msec) : 10=0.07%, 20=0.65%, 50=3.44%, 100=94.25%, 250=1.60% 00:11:46.392 cpu : usr=0.57%, sys=3.89%, ctx=2243, majf=0, minf=4097 00:11:46.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:46.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:46.392 issued rwts: total=10342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.392 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:46.392 job6: (groupid=0, jobs=1): err= 0: pid=66921: Wed Nov 6 15:10:13 2024 00:11:46.392 read: IOPS=319, BW=79.9MiB/s (83.8MB/s)(810MiB/10141msec) 00:11:46.392 slat (usec): min=21, max=89992, avg=3101.73, stdev=9041.32 00:11:46.392 clat (msec): min=18, max=307, avg=196.89, stdev=20.49 00:11:46.392 lat (msec): min=19, max=349, avg=199.99, stdev=21.99 00:11:46.392 clat percentiles (msec): 00:11:46.392 | 1.00th=[ 144], 5.00th=[ 167], 10.00th=[ 178], 20.00th=[ 194], 00:11:46.392 | 30.00th=[ 197], 40.00th=[ 197], 50.00th=[ 199], 60.00th=[ 201], 00:11:46.392 | 70.00th=[ 201], 80.00th=[ 203], 90.00th=[ 209], 95.00th=[ 220], 00:11:46.392 | 99.00th=[ 253], 99.50th=[ 288], 99.90th=[ 305], 99.95th=[ 309], 00:11:46.392 | 99.99th=[ 309] 00:11:46.392 bw ( KiB/s): min=72704, max=94720, per=5.05%, avg=81331.20, stdev=5934.45, samples=20 00:11:46.392 iops : min= 284, max= 370, avg=317.70, stdev=23.18, samples=20 00:11:46.392 lat (msec) : 20=0.03%, 50=0.03%, 100=0.71%, 250=98.06%, 500=1.17% 00:11:46.392 cpu : usr=0.17%, sys=1.20%, ctx=778, majf=0, minf=4097 00:11:46.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:11:46.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:46.392 issued rwts: total=3241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.392 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:46.392 job7: (groupid=0, jobs=1): err= 0: 
pid=66922: Wed Nov 6 15:10:13 2024 00:11:46.392 read: IOPS=318, BW=79.5MiB/s (83.4MB/s)(806MiB/10135msec) 00:11:46.392 slat (usec): min=20, max=115047, avg=3114.10, stdev=10055.62 00:11:46.392 clat (msec): min=80, max=320, avg=197.87, stdev=18.50 00:11:46.392 lat (msec): min=114, max=320, avg=200.98, stdev=20.54 00:11:46.392 clat percentiles (msec): 00:11:46.392 | 1.00th=[ 138], 5.00th=[ 165], 10.00th=[ 186], 20.00th=[ 194], 00:11:46.392 | 30.00th=[ 197], 40.00th=[ 199], 50.00th=[ 199], 60.00th=[ 201], 00:11:46.392 | 70.00th=[ 203], 80.00th=[ 205], 90.00th=[ 209], 95.00th=[ 218], 00:11:46.392 | 99.00th=[ 271], 99.50th=[ 279], 99.90th=[ 313], 99.95th=[ 321], 00:11:46.392 | 99.99th=[ 321] 00:11:46.392 bw ( KiB/s): min=74752, max=95744, per=5.02%, avg=80886.00, stdev=4536.10, samples=20 00:11:46.392 iops : min= 292, max= 374, avg=315.85, stdev=17.77, samples=20 00:11:46.392 lat (msec) : 100=0.03%, 250=97.95%, 500=2.02% 00:11:46.392 cpu : usr=0.16%, sys=1.40%, ctx=788, majf=0, minf=4097 00:11:46.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:11:46.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:46.392 issued rwts: total=3223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.392 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:46.392 job8: (groupid=0, jobs=1): err= 0: pid=66923: Wed Nov 6 15:10:13 2024 00:11:46.392 read: IOPS=317, BW=79.5MiB/s (83.3MB/s)(806MiB/10135msec) 00:11:46.392 slat (usec): min=21, max=94650, avg=3073.79, stdev=9571.07 00:11:46.392 clat (msec): min=64, max=293, avg=197.92, stdev=16.78 00:11:46.392 lat (msec): min=112, max=293, avg=201.00, stdev=18.67 00:11:46.392 clat percentiles (msec): 00:11:46.392 | 1.00th=[ 150], 5.00th=[ 167], 10.00th=[ 180], 20.00th=[ 194], 00:11:46.392 | 30.00th=[ 197], 40.00th=[ 199], 50.00th=[ 199], 60.00th=[ 201], 00:11:46.392 | 70.00th=[ 203], 80.00th=[ 205], 90.00th=[ 209], 95.00th=[ 220], 00:11:46.392 | 99.00th=[ 259], 99.50th=[ 268], 99.90th=[ 284], 99.95th=[ 292], 00:11:46.392 | 99.99th=[ 292] 00:11:46.392 bw ( KiB/s): min=75776, max=92672, per=5.02%, avg=80868.45, stdev=3793.53, samples=20 00:11:46.392 iops : min= 296, max= 362, avg=315.75, stdev=14.86, samples=20 00:11:46.392 lat (msec) : 100=0.03%, 250=98.04%, 500=1.92% 00:11:46.392 cpu : usr=0.17%, sys=1.33%, ctx=830, majf=0, minf=4097 00:11:46.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:11:46.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:46.392 issued rwts: total=3222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.392 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:46.392 job9: (groupid=0, jobs=1): err= 0: pid=66924: Wed Nov 6 15:10:13 2024 00:11:46.392 read: IOPS=315, BW=79.0MiB/s (82.8MB/s)(800MiB/10132msec) 00:11:46.392 slat (usec): min=21, max=134737, avg=3119.68, stdev=9613.13 00:11:46.392 clat (msec): min=44, max=325, avg=199.25, stdev=20.66 00:11:46.392 lat (msec): min=45, max=335, avg=202.37, stdev=22.16 00:11:46.392 clat percentiles (msec): 00:11:46.392 | 1.00th=[ 155], 5.00th=[ 171], 10.00th=[ 190], 20.00th=[ 194], 00:11:46.392 | 30.00th=[ 197], 40.00th=[ 199], 50.00th=[ 199], 60.00th=[ 201], 00:11:46.392 | 70.00th=[ 203], 80.00th=[ 205], 90.00th=[ 209], 95.00th=[ 224], 00:11:46.392 | 99.00th=[ 279], 99.50th=[ 305], 99.90th=[ 326], 99.95th=[ 
326], 00:11:46.392 | 99.99th=[ 326] 00:11:46.392 bw ( KiB/s): min=70656, max=94208, per=4.98%, avg=80307.20, stdev=6204.95, samples=20 00:11:46.392 iops : min= 276, max= 368, avg=313.70, stdev=24.24, samples=20 00:11:46.392 lat (msec) : 50=0.50%, 250=97.06%, 500=2.44% 00:11:46.392 cpu : usr=0.14%, sys=1.41%, ctx=773, majf=0, minf=4097 00:11:46.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:11:46.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:46.392 issued rwts: total=3200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.392 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:46.392 job10: (groupid=0, jobs=1): err= 0: pid=66925: Wed Nov 6 15:10:13 2024 00:11:46.392 read: IOPS=1026, BW=257MiB/s (269MB/s)(2576MiB/10040msec) 00:11:46.392 slat (usec): min=20, max=25537, avg=965.66, stdev=2147.86 00:11:46.392 clat (msec): min=12, max=115, avg=61.29, stdev= 9.32 00:11:46.392 lat (msec): min=13, max=115, avg=62.26, stdev= 9.38 00:11:46.392 clat percentiles (msec): 00:11:46.392 | 1.00th=[ 33], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 57], 00:11:46.392 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 62], 00:11:46.392 | 70.00th=[ 64], 80.00th=[ 66], 90.00th=[ 69], 95.00th=[ 78], 00:11:46.392 | 99.00th=[ 97], 99.50th=[ 102], 99.90th=[ 111], 99.95th=[ 111], 00:11:46.392 | 99.99th=[ 115] 00:11:46.392 bw ( KiB/s): min=186228, max=307712, per=16.27%, avg=262188.20, stdev=26605.04, samples=20 00:11:46.392 iops : min= 727, max= 1202, avg=1024.15, stdev=103.99, samples=20 00:11:46.392 lat (msec) : 20=0.11%, 50=4.13%, 100=95.12%, 250=0.64% 00:11:46.392 cpu : usr=0.54%, sys=4.19%, ctx=2243, majf=0, minf=4097 00:11:46.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:46.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:46.392 issued rwts: total=10304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.392 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:46.392 00:11:46.392 Run status group 0 (all jobs): 00:11:46.392 READ: bw=1574MiB/s (1650MB/s), 79.0MiB/s-258MiB/s (82.8MB/s-270MB/s), io=15.6GiB (16.7GB), run=10013-10141msec 00:11:46.392 00:11:46.392 Disk stats (read/write): 00:11:46.392 nvme0n1: ios=20149/0, merge=0/0, ticks=1236876/0, in_queue=1236876, util=97.84% 00:11:46.392 nvme10n1: ios=6469/0, merge=0/0, ticks=1226821/0, in_queue=1226821, util=98.05% 00:11:46.392 nvme1n1: ios=6962/0, merge=0/0, ticks=1223359/0, in_queue=1223359, util=98.06% 00:11:46.392 nvme2n1: ios=6359/0, merge=0/0, ticks=1221862/0, in_queue=1221862, util=98.23% 00:11:46.392 nvme3n1: ios=20128/0, merge=0/0, ticks=1236903/0, in_queue=1236903, util=98.24% 00:11:46.392 nvme4n1: ios=20585/0, merge=0/0, ticks=1237253/0, in_queue=1237253, util=98.54% 00:11:46.392 nvme5n1: ios=6357/0, merge=0/0, ticks=1224565/0, in_queue=1224565, util=98.69% 00:11:46.392 nvme6n1: ios=6319/0, merge=0/0, ticks=1223083/0, in_queue=1223083, util=98.69% 00:11:46.392 nvme7n1: ios=6317/0, merge=0/0, ticks=1224196/0, in_queue=1224196, util=98.88% 00:11:46.392 nvme8n1: ios=6287/0, merge=0/0, ticks=1219210/0, in_queue=1219210, util=99.09% 00:11:46.392 nvme9n1: ios=20499/0, merge=0/0, ticks=1235763/0, in_queue=1235763, util=99.19% 00:11:46.392 15:10:13 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf 
-i 262144 -d 64 -t randwrite -r 10 00:11:46.392 [global] 00:11:46.392 thread=1 00:11:46.392 invalidate=1 00:11:46.392 rw=randwrite 00:11:46.392 time_based=1 00:11:46.392 runtime=10 00:11:46.392 ioengine=libaio 00:11:46.392 direct=1 00:11:46.392 bs=262144 00:11:46.392 iodepth=64 00:11:46.392 norandommap=1 00:11:46.392 numjobs=1 00:11:46.392 00:11:46.392 [job0] 00:11:46.392 filename=/dev/nvme0n1 00:11:46.392 [job1] 00:11:46.392 filename=/dev/nvme10n1 00:11:46.392 [job2] 00:11:46.392 filename=/dev/nvme1n1 00:11:46.392 [job3] 00:11:46.392 filename=/dev/nvme2n1 00:11:46.392 [job4] 00:11:46.392 filename=/dev/nvme3n1 00:11:46.392 [job5] 00:11:46.392 filename=/dev/nvme4n1 00:11:46.392 [job6] 00:11:46.392 filename=/dev/nvme5n1 00:11:46.392 [job7] 00:11:46.392 filename=/dev/nvme6n1 00:11:46.392 [job8] 00:11:46.392 filename=/dev/nvme7n1 00:11:46.392 [job9] 00:11:46.392 filename=/dev/nvme8n1 00:11:46.392 [job10] 00:11:46.392 filename=/dev/nvme9n1 00:11:46.392 Could not set queue depth (nvme0n1) 00:11:46.392 Could not set queue depth (nvme10n1) 00:11:46.392 Could not set queue depth (nvme1n1) 00:11:46.393 Could not set queue depth (nvme2n1) 00:11:46.393 Could not set queue depth (nvme3n1) 00:11:46.393 Could not set queue depth (nvme4n1) 00:11:46.393 Could not set queue depth (nvme5n1) 00:11:46.393 Could not set queue depth (nvme6n1) 00:11:46.393 Could not set queue depth (nvme7n1) 00:11:46.393 Could not set queue depth (nvme8n1) 00:11:46.393 Could not set queue depth (nvme9n1) 00:11:46.393 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:46.393 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:46.393 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:46.393 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:46.393 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:46.393 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:46.393 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:46.393 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:46.393 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:46.393 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:46.393 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:46.393 fio-3.35 00:11:46.393 Starting 11 threads 00:11:56.426 00:11:56.426 job0: (groupid=0, jobs=1): err= 0: pid=67119: Wed Nov 6 15:10:24 2024 00:11:56.426 write: IOPS=1046, BW=262MiB/s (274MB/s)(2631MiB/10054msec); 0 zone resets 00:11:56.426 slat (usec): min=17, max=38887, avg=938.17, stdev=1630.22 00:11:56.426 clat (msec): min=10, max=127, avg=60.19, stdev= 9.58 00:11:56.426 lat (msec): min=10, max=127, avg=61.12, stdev= 9.65 00:11:56.426 clat percentiles (msec): 00:11:56.426 | 1.00th=[ 52], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 56], 00:11:56.426 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 59], 00:11:56.426 | 70.00th=[ 60], 
80.00th=[ 61], 90.00th=[ 61], 95.00th=[ 89], 00:11:56.426 | 99.00th=[ 96], 99.50th=[ 104], 99.90th=[ 122], 99.95th=[ 126], 00:11:56.426 | 99.99th=[ 128] 00:11:56.426 bw ( KiB/s): min=190464, max=286720, per=18.51%, avg=267801.60, stdev=27287.59, samples=20 00:11:56.426 iops : min= 744, max= 1120, avg=1046.10, stdev=106.59, samples=20 00:11:56.426 lat (msec) : 20=0.11%, 50=0.82%, 100=98.45%, 250=0.62% 00:11:56.426 cpu : usr=1.59%, sys=2.95%, ctx=13958, majf=0, minf=1 00:11:56.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:56.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:56.426 issued rwts: total=0,10524,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.426 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:56.426 job1: (groupid=0, jobs=1): err= 0: pid=67120: Wed Nov 6 15:10:24 2024 00:11:56.426 write: IOPS=479, BW=120MiB/s (126MB/s)(1216MiB/10144msec); 0 zone resets 00:11:56.426 slat (usec): min=18, max=23999, avg=2050.32, stdev=3656.68 00:11:56.426 clat (msec): min=26, max=298, avg=131.34, stdev=34.73 00:11:56.426 lat (msec): min=26, max=298, avg=133.39, stdev=35.07 00:11:56.426 clat percentiles (msec): 00:11:56.426 | 1.00th=[ 84], 5.00th=[ 88], 10.00th=[ 89], 20.00th=[ 93], 00:11:56.426 | 30.00th=[ 94], 40.00th=[ 117], 50.00th=[ 150], 60.00th=[ 159], 00:11:56.426 | 70.00th=[ 161], 80.00th=[ 161], 90.00th=[ 163], 95.00th=[ 163], 00:11:56.426 | 99.00th=[ 178], 99.50th=[ 239], 99.90th=[ 288], 99.95th=[ 288], 00:11:56.426 | 99.99th=[ 300] 00:11:56.426 bw ( KiB/s): min=98816, max=178176, per=8.50%, avg=122931.20, stdev=32713.18, samples=20 00:11:56.426 iops : min= 386, max= 696, avg=480.20, stdev=127.79, samples=20 00:11:56.426 lat (msec) : 50=0.41%, 100=37.92%, 250=61.29%, 500=0.37% 00:11:56.426 cpu : usr=0.88%, sys=1.58%, ctx=5942, majf=0, minf=1 00:11:56.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:56.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:56.426 issued rwts: total=0,4865,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.426 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:56.426 job2: (groupid=0, jobs=1): err= 0: pid=67132: Wed Nov 6 15:10:24 2024 00:11:56.426 write: IOPS=468, BW=117MiB/s (123MB/s)(1187MiB/10121msec); 0 zone resets 00:11:56.426 slat (usec): min=16, max=13099, avg=2103.19, stdev=3651.03 00:11:56.426 clat (msec): min=15, max=244, avg=134.33, stdev=21.92 00:11:56.426 lat (msec): min=15, max=244, avg=136.43, stdev=21.96 00:11:56.426 clat percentiles (msec): 00:11:56.427 | 1.00th=[ 89], 5.00th=[ 96], 10.00th=[ 121], 20.00th=[ 124], 00:11:56.427 | 30.00th=[ 128], 40.00th=[ 130], 50.00th=[ 131], 60.00th=[ 132], 00:11:56.427 | 70.00th=[ 133], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 169], 00:11:56.427 | 99.00th=[ 171], 99.50th=[ 194], 99.90th=[ 236], 99.95th=[ 236], 00:11:56.427 | 99.99th=[ 245] 00:11:56.427 bw ( KiB/s): min=96256, max=145920, per=8.29%, avg=119883.95, stdev=14192.74, samples=20 00:11:56.427 iops : min= 376, max= 570, avg=468.25, stdev=55.45, samples=20 00:11:56.427 lat (msec) : 20=0.08%, 50=0.51%, 100=6.45%, 250=92.96% 00:11:56.427 cpu : usr=0.84%, sys=1.15%, ctx=5105, majf=0, minf=1 00:11:56.427 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:56.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:56.427 issued rwts: total=0,4746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.427 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:56.427 job3: (groupid=0, jobs=1): err= 0: pid=67133: Wed Nov 6 15:10:24 2024 00:11:56.427 write: IOPS=493, BW=123MiB/s (129MB/s)(1249MiB/10117msec); 0 zone resets 00:11:56.427 slat (usec): min=19, max=60379, avg=1937.69, stdev=3563.65 00:11:56.427 clat (msec): min=7, max=241, avg=127.53, stdev=25.07 00:11:56.427 lat (msec): min=7, max=241, avg=129.47, stdev=25.32 00:11:56.427 clat percentiles (msec): 00:11:56.427 | 1.00th=[ 34], 5.00th=[ 87], 10.00th=[ 93], 20.00th=[ 123], 00:11:56.427 | 30.00th=[ 125], 40.00th=[ 130], 50.00th=[ 131], 60.00th=[ 132], 00:11:56.427 | 70.00th=[ 132], 80.00th=[ 148], 90.00th=[ 157], 95.00th=[ 161], 00:11:56.427 | 99.00th=[ 163], 99.50th=[ 192], 99.90th=[ 234], 99.95th=[ 234], 00:11:56.427 | 99.99th=[ 243] 00:11:56.427 bw ( KiB/s): min=102400, max=189440, per=8.73%, avg=126324.30, stdev=20823.68, samples=20 00:11:56.427 iops : min= 400, max= 740, avg=493.45, stdev=81.34, samples=20 00:11:56.427 lat (msec) : 10=0.06%, 20=0.36%, 50=1.60%, 100=12.59%, 250=85.39% 00:11:56.427 cpu : usr=0.77%, sys=1.57%, ctx=6298, majf=0, minf=1 00:11:56.427 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:11:56.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:56.427 issued rwts: total=0,4997,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.427 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:56.427 job4: (groupid=0, jobs=1): err= 0: pid=67134: Wed Nov 6 15:10:24 2024 00:11:56.427 write: IOPS=399, BW=99.9MiB/s (105MB/s)(1013MiB/10139msec); 0 zone resets 00:11:56.427 slat (usec): min=19, max=55836, avg=2430.03, stdev=4307.06 00:11:56.427 clat (msec): min=16, max=296, avg=157.73, stdev=18.86 00:11:56.427 lat (msec): min=18, max=296, avg=160.16, stdev=18.74 00:11:56.427 clat percentiles (msec): 00:11:56.427 | 1.00th=[ 63], 5.00th=[ 136], 10.00th=[ 150], 20.00th=[ 153], 00:11:56.427 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 161], 60.00th=[ 161], 00:11:56.427 | 70.00th=[ 163], 80.00th=[ 165], 90.00th=[ 167], 95.00th=[ 169], 00:11:56.427 | 99.00th=[ 197], 99.50th=[ 247], 99.90th=[ 288], 99.95th=[ 288], 00:11:56.427 | 99.99th=[ 296] 00:11:56.427 bw ( KiB/s): min=96256, max=125440, per=7.05%, avg=102046.85, stdev=5901.36, samples=20 00:11:56.427 iops : min= 376, max= 490, avg=398.60, stdev=23.05, samples=20 00:11:56.427 lat (msec) : 20=0.05%, 50=0.57%, 100=1.46%, 250=97.48%, 500=0.44% 00:11:56.427 cpu : usr=0.62%, sys=1.03%, ctx=5307, majf=0, minf=1 00:11:56.427 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:11:56.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:56.427 issued rwts: total=0,4050,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.427 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:56.427 job5: (groupid=0, jobs=1): err= 0: pid=67135: Wed Nov 6 15:10:24 2024 00:11:56.427 write: IOPS=475, BW=119MiB/s (125MB/s)(1202MiB/10112msec); 0 zone resets 00:11:56.427 slat (usec): min=18, max=29075, avg=2075.06, stdev=3616.26 00:11:56.427 clat (msec): min=17, max=237, avg=132.51, stdev=19.03 00:11:56.427 lat (msec): 
min=17, max=237, avg=134.58, stdev=18.98 00:11:56.427 clat percentiles (msec): 00:11:56.427 | 1.00th=[ 103], 5.00th=[ 117], 10.00th=[ 118], 20.00th=[ 122], 00:11:56.427 | 30.00th=[ 125], 40.00th=[ 126], 50.00th=[ 127], 60.00th=[ 127], 00:11:56.427 | 70.00th=[ 128], 80.00th=[ 155], 90.00th=[ 163], 95.00th=[ 169], 00:11:56.427 | 99.00th=[ 176], 99.50th=[ 192], 99.90th=[ 230], 99.95th=[ 230], 00:11:56.427 | 99.99th=[ 239] 00:11:56.427 bw ( KiB/s): min=98304, max=131584, per=8.39%, avg=121446.40, stdev=13055.18, samples=20 00:11:56.427 iops : min= 384, max= 514, avg=474.40, stdev=51.00, samples=20 00:11:56.427 lat (msec) : 20=0.04%, 50=0.33%, 100=0.60%, 250=99.02% 00:11:56.427 cpu : usr=0.67%, sys=1.40%, ctx=5393, majf=0, minf=1 00:11:56.427 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:56.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:56.427 issued rwts: total=0,4807,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.427 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:56.427 job6: (groupid=0, jobs=1): err= 0: pid=67136: Wed Nov 6 15:10:24 2024 00:11:56.427 write: IOPS=476, BW=119MiB/s (125MB/s)(1205MiB/10111msec); 0 zone resets 00:11:56.427 slat (usec): min=18, max=29777, avg=2068.88, stdev=3599.51 00:11:56.427 clat (msec): min=17, max=236, avg=132.13, stdev=17.97 00:11:56.427 lat (msec): min=17, max=236, avg=134.20, stdev=17.89 00:11:56.427 clat percentiles (msec): 00:11:56.427 | 1.00th=[ 101], 5.00th=[ 117], 10.00th=[ 118], 20.00th=[ 123], 00:11:56.427 | 30.00th=[ 125], 40.00th=[ 126], 50.00th=[ 127], 60.00th=[ 127], 00:11:56.427 | 70.00th=[ 129], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 161], 00:11:56.427 | 99.00th=[ 174], 99.50th=[ 190], 99.90th=[ 228], 99.95th=[ 228], 00:11:56.427 | 99.99th=[ 236] 00:11:56.427 bw ( KiB/s): min=102400, max=131584, per=8.42%, avg=121753.60, stdev=12304.38, samples=20 00:11:56.427 iops : min= 400, max= 514, avg=475.60, stdev=48.06, samples=20 00:11:56.427 lat (msec) : 20=0.08%, 50=0.33%, 100=0.58%, 250=99.00% 00:11:56.427 cpu : usr=0.84%, sys=1.54%, ctx=4883, majf=0, minf=1 00:11:56.427 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:56.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:56.427 issued rwts: total=0,4819,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.427 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:56.427 job7: (groupid=0, jobs=1): err= 0: pid=67137: Wed Nov 6 15:10:24 2024 00:11:56.427 write: IOPS=468, BW=117MiB/s (123MB/s)(1186MiB/10126msec); 0 zone resets 00:11:56.427 slat (usec): min=19, max=26789, avg=2103.69, stdev=3656.92 00:11:56.427 clat (msec): min=10, max=251, avg=134.48, stdev=22.47 00:11:56.427 lat (msec): min=10, max=251, avg=136.58, stdev=22.52 00:11:56.427 clat percentiles (msec): 00:11:56.427 | 1.00th=[ 74], 5.00th=[ 96], 10.00th=[ 121], 20.00th=[ 124], 00:11:56.427 | 30.00th=[ 129], 40.00th=[ 130], 50.00th=[ 131], 60.00th=[ 132], 00:11:56.427 | 70.00th=[ 133], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 169], 00:11:56.427 | 99.00th=[ 171], 99.50th=[ 203], 99.90th=[ 243], 99.95th=[ 243], 00:11:56.427 | 99.99th=[ 251] 00:11:56.427 bw ( KiB/s): min=96256, max=147238, per=8.29%, avg=119905.60, stdev=14280.23, samples=20 00:11:56.427 iops : min= 376, max= 575, avg=467.95, stdev=55.83, samples=20 00:11:56.427 
lat (msec) : 20=0.27%, 50=0.34%, 100=6.60%, 250=92.77%, 500=0.02% 00:11:56.427 cpu : usr=0.85%, sys=1.36%, ctx=5809, majf=0, minf=1 00:11:56.427 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:56.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:56.427 issued rwts: total=0,4743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.427 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:56.427 job8: (groupid=0, jobs=1): err= 0: pid=67138: Wed Nov 6 15:10:24 2024 00:11:56.427 write: IOPS=404, BW=101MiB/s (106MB/s)(1025MiB/10137msec); 0 zone resets 00:11:56.427 slat (usec): min=19, max=58144, avg=2373.21, stdev=4275.44 00:11:56.427 clat (msec): min=13, max=289, avg=155.76, stdev=21.90 00:11:56.427 lat (msec): min=13, max=289, avg=158.14, stdev=21.98 00:11:56.427 clat percentiles (msec): 00:11:56.427 | 1.00th=[ 48], 5.00th=[ 122], 10.00th=[ 150], 20.00th=[ 153], 00:11:56.427 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 161], 60.00th=[ 161], 00:11:56.427 | 70.00th=[ 163], 80.00th=[ 163], 90.00th=[ 167], 95.00th=[ 169], 00:11:56.427 | 99.00th=[ 188], 99.50th=[ 241], 99.90th=[ 279], 99.95th=[ 279], 00:11:56.427 | 99.99th=[ 292] 00:11:56.427 bw ( KiB/s): min=96256, max=126976, per=7.14%, avg=103362.55, stdev=7969.59, samples=20 00:11:56.427 iops : min= 376, max= 496, avg=403.75, stdev=31.13, samples=20 00:11:56.427 lat (msec) : 20=0.07%, 50=1.10%, 100=2.10%, 250=96.39%, 500=0.34% 00:11:56.427 cpu : usr=0.58%, sys=1.10%, ctx=5467, majf=0, minf=1 00:11:56.427 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:11:56.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:56.428 issued rwts: total=0,4101,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.428 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:56.428 job9: (groupid=0, jobs=1): err= 0: pid=67139: Wed Nov 6 15:10:24 2024 00:11:56.428 write: IOPS=479, BW=120MiB/s (126MB/s)(1217MiB/10143msec); 0 zone resets 00:11:56.428 slat (usec): min=18, max=17911, avg=2049.39, stdev=3649.32 00:11:56.428 clat (msec): min=16, max=299, avg=131.25, stdev=35.06 00:11:56.428 lat (msec): min=16, max=299, avg=133.30, stdev=35.41 00:11:56.428 clat percentiles (msec): 00:11:56.428 | 1.00th=[ 83], 5.00th=[ 88], 10.00th=[ 89], 20.00th=[ 93], 00:11:56.428 | 30.00th=[ 94], 40.00th=[ 120], 50.00th=[ 153], 60.00th=[ 159], 00:11:56.428 | 70.00th=[ 161], 80.00th=[ 161], 90.00th=[ 163], 95.00th=[ 163], 00:11:56.428 | 99.00th=[ 178], 99.50th=[ 239], 99.90th=[ 288], 99.95th=[ 288], 00:11:56.428 | 99.99th=[ 300] 00:11:56.428 bw ( KiB/s): min=98816, max=178176, per=8.50%, avg=123008.00, stdev=32706.84, samples=20 00:11:56.428 iops : min= 386, max= 696, avg=480.50, stdev=127.76, samples=20 00:11:56.428 lat (msec) : 20=0.16%, 50=0.41%, 100=37.96%, 250=61.09%, 500=0.37% 00:11:56.428 cpu : usr=0.84%, sys=1.42%, ctx=5956, majf=0, minf=1 00:11:56.428 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:56.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:56.428 issued rwts: total=0,4868,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.428 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:56.428 job10: (groupid=0, jobs=1): err= 0: pid=67140: 
Wed Nov 6 15:10:24 2024 00:11:56.428 write: IOPS=475, BW=119MiB/s (125MB/s)(1203MiB/10112msec); 0 zone resets 00:11:56.428 slat (usec): min=18, max=29752, avg=2074.33, stdev=3617.42 00:11:56.428 clat (msec): min=15, max=230, avg=132.42, stdev=18.89 00:11:56.428 lat (msec): min=15, max=231, avg=134.49, stdev=18.84 00:11:56.428 clat percentiles (msec): 00:11:56.428 | 1.00th=[ 100], 5.00th=[ 117], 10.00th=[ 118], 20.00th=[ 122], 00:11:56.428 | 30.00th=[ 125], 40.00th=[ 126], 50.00th=[ 127], 60.00th=[ 127], 00:11:56.428 | 70.00th=[ 128], 80.00th=[ 153], 90.00th=[ 163], 95.00th=[ 167], 00:11:56.428 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 224], 99.95th=[ 224], 00:11:56.428 | 99.99th=[ 232] 00:11:56.428 bw ( KiB/s): min=97792, max=133120, per=8.40%, avg=121510.30, stdev=13215.27, samples=20 00:11:56.428 iops : min= 382, max= 520, avg=474.60, stdev=51.59, samples=20 00:11:56.428 lat (msec) : 20=0.08%, 50=0.33%, 100=0.60%, 250=98.98% 00:11:56.428 cpu : usr=0.75%, sys=1.56%, ctx=6194, majf=0, minf=1 00:11:56.428 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:56.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:56.428 issued rwts: total=0,4810,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.428 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:56.428 00:11:56.428 Run status group 0 (all jobs): 00:11:56.428 WRITE: bw=1413MiB/s (1482MB/s), 99.9MiB/s-262MiB/s (105MB/s-274MB/s), io=14.0GiB (15.0GB), run=10054-10144msec 00:11:56.428 00:11:56.428 Disk stats (read/write): 00:11:56.428 nvme0n1: ios=50/20873, merge=0/0, ticks=43/1215725, in_queue=1215768, util=97.86% 00:11:56.428 nvme10n1: ios=49/9594, merge=0/0, ticks=76/1212089, in_queue=1212165, util=98.09% 00:11:56.428 nvme1n1: ios=37/9348, merge=0/0, ticks=28/1214023, in_queue=1214051, util=98.03% 00:11:56.428 nvme2n1: ios=23/9845, merge=0/0, ticks=56/1213235, in_queue=1213291, util=98.01% 00:11:56.428 nvme3n1: ios=20/7961, merge=0/0, ticks=20/1212657, in_queue=1212677, util=98.04% 00:11:56.428 nvme4n1: ios=0/9484, merge=0/0, ticks=0/1214567, in_queue=1214567, util=98.31% 00:11:56.428 nvme5n1: ios=0/9504, merge=0/0, ticks=0/1213545, in_queue=1213545, util=98.35% 00:11:56.428 nvme6n1: ios=0/9360, merge=0/0, ticks=0/1215488, in_queue=1215488, util=98.58% 00:11:56.428 nvme7n1: ios=0/8055, merge=0/0, ticks=0/1211898, in_queue=1211898, util=98.60% 00:11:56.428 nvme8n1: ios=0/9601, merge=0/0, ticks=0/1212453, in_queue=1212453, util=98.84% 00:11:56.428 nvme9n1: ios=0/9478, merge=0/0, ticks=0/1213141, in_queue=1213141, util=98.90% 00:11:56.428 15:10:24 -- target/multiconnection.sh@36 -- # sync 00:11:56.428 15:10:24 -- target/multiconnection.sh@37 -- # seq 1 11 00:11:56.428 15:10:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:56.428 15:10:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.428 15:10:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:11:56.428 15:10:24 -- common/autotest_common.sh@1208 -- # local i=0 00:11:56.428 15:10:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:11:56.428 15:10:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:56.428 15:10:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:56.428 15:10:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 
00:11:56.428 15:10:24 -- common/autotest_common.sh@1220 -- # return 0 00:11:56.428 15:10:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.428 15:10:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.428 15:10:24 -- common/autotest_common.sh@10 -- # set +x 00:11:56.428 15:10:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.428 15:10:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:56.428 15:10:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:11:56.428 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:11:56.428 15:10:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:11:56.428 15:10:24 -- common/autotest_common.sh@1208 -- # local i=0 00:11:56.428 15:10:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:11:56.428 15:10:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:56.428 15:10:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:56.428 15:10:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:11:56.428 15:10:24 -- common/autotest_common.sh@1220 -- # return 0 00:11:56.428 15:10:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:56.428 15:10:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.428 15:10:24 -- common/autotest_common.sh@10 -- # set +x 00:11:56.428 15:10:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.428 15:10:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:56.428 15:10:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:11:56.428 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:11:56.428 15:10:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:11:56.428 15:10:24 -- common/autotest_common.sh@1208 -- # local i=0 00:11:56.428 15:10:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:11:56.428 15:10:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:56.428 15:10:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:56.428 15:10:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:11:56.428 15:10:24 -- common/autotest_common.sh@1220 -- # return 0 00:11:56.428 15:10:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:56.428 15:10:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.428 15:10:24 -- common/autotest_common.sh@10 -- # set +x 00:11:56.428 15:10:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.428 15:10:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:56.428 15:10:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:11:56.428 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:11:56.428 15:10:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:11:56.428 15:10:24 -- common/autotest_common.sh@1208 -- # local i=0 00:11:56.428 15:10:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:11:56.428 15:10:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:56.428 15:10:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:56.428 15:10:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:11:56.428 15:10:24 -- common/autotest_common.sh@1220 -- # return 0 00:11:56.428 15:10:24 -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:56.428 15:10:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.428 15:10:24 -- common/autotest_common.sh@10 -- # set +x 00:11:56.428 15:10:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.428 15:10:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:56.428 15:10:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:11:56.428 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:11:56.428 15:10:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:11:56.428 15:10:24 -- common/autotest_common.sh@1208 -- # local i=0 00:11:56.428 15:10:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:11:56.428 15:10:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:56.428 15:10:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:56.428 15:10:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:11:56.428 15:10:24 -- common/autotest_common.sh@1220 -- # return 0 00:11:56.428 15:10:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:11:56.428 15:10:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.428 15:10:24 -- common/autotest_common.sh@10 -- # set +x 00:11:56.428 15:10:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.428 15:10:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:56.428 15:10:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:11:56.428 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:11:56.428 15:10:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:11:56.428 15:10:24 -- common/autotest_common.sh@1208 -- # local i=0 00:11:56.428 15:10:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:56.428 15:10:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:11:56.429 15:10:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:56.429 15:10:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:11:56.429 15:10:24 -- common/autotest_common.sh@1220 -- # return 0 00:11:56.429 15:10:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:11:56.429 15:10:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.429 15:10:24 -- common/autotest_common.sh@10 -- # set +x 00:11:56.429 15:10:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.429 15:10:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:56.429 15:10:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:11:56.429 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:11:56.429 15:10:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:11:56.429 15:10:25 -- common/autotest_common.sh@1208 -- # local i=0 00:11:56.429 15:10:25 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:11:56.429 15:10:25 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:56.429 15:10:25 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:56.429 15:10:25 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:11:56.429 15:10:25 -- common/autotest_common.sh@1220 -- # return 0 00:11:56.429 15:10:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 
00:11:56.429 15:10:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.429 15:10:25 -- common/autotest_common.sh@10 -- # set +x 00:11:56.429 15:10:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.429 15:10:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:56.429 15:10:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:11:56.429 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:11:56.429 15:10:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:11:56.429 15:10:25 -- common/autotest_common.sh@1208 -- # local i=0 00:11:56.429 15:10:25 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:11:56.429 15:10:25 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:56.429 15:10:25 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:56.429 15:10:25 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:11:56.429 15:10:25 -- common/autotest_common.sh@1220 -- # return 0 00:11:56.429 15:10:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:11:56.429 15:10:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.429 15:10:25 -- common/autotest_common.sh@10 -- # set +x 00:11:56.429 15:10:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.429 15:10:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:56.429 15:10:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:11:56.429 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:11:56.429 15:10:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:11:56.429 15:10:25 -- common/autotest_common.sh@1208 -- # local i=0 00:11:56.429 15:10:25 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:56.429 15:10:25 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:11:56.429 15:10:25 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:56.429 15:10:25 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:11:56.429 15:10:25 -- common/autotest_common.sh@1220 -- # return 0 00:11:56.429 15:10:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:11:56.429 15:10:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.429 15:10:25 -- common/autotest_common.sh@10 -- # set +x 00:11:56.429 15:10:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.429 15:10:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:56.429 15:10:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:11:56.429 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:11:56.429 15:10:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:11:56.429 15:10:25 -- common/autotest_common.sh@1208 -- # local i=0 00:11:56.429 15:10:25 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:11:56.429 15:10:25 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:56.429 15:10:25 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:56.429 15:10:25 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:11:56.429 15:10:25 -- common/autotest_common.sh@1220 -- # return 0 00:11:56.429 15:10:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:11:56.429 15:10:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.429 
15:10:25 -- common/autotest_common.sh@10 -- # set +x 00:11:56.429 15:10:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.429 15:10:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:56.429 15:10:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:11:56.429 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:11:56.429 15:10:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:11:56.429 15:10:25 -- common/autotest_common.sh@1208 -- # local i=0 00:11:56.429 15:10:25 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:56.429 15:10:25 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:11:56.429 15:10:25 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:56.429 15:10:25 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:11:56.429 15:10:25 -- common/autotest_common.sh@1220 -- # return 0 00:11:56.429 15:10:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:11:56.429 15:10:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.429 15:10:25 -- common/autotest_common.sh@10 -- # set +x 00:11:56.429 15:10:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.429 15:10:25 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:11:56.429 15:10:25 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:11:56.429 15:10:25 -- target/multiconnection.sh@47 -- # nvmftestfini 00:11:56.429 15:10:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:56.429 15:10:25 -- nvmf/common.sh@116 -- # sync 00:11:56.429 15:10:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:56.429 15:10:25 -- nvmf/common.sh@119 -- # set +e 00:11:56.429 15:10:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:56.429 15:10:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:56.429 rmmod nvme_tcp 00:11:56.429 rmmod nvme_fabrics 00:11:56.429 rmmod nvme_keyring 00:11:56.429 15:10:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:56.429 15:10:25 -- nvmf/common.sh@123 -- # set -e 00:11:56.429 15:10:25 -- nvmf/common.sh@124 -- # return 0 00:11:56.429 15:10:25 -- nvmf/common.sh@477 -- # '[' -n 66444 ']' 00:11:56.429 15:10:25 -- nvmf/common.sh@478 -- # killprocess 66444 00:11:56.429 15:10:25 -- common/autotest_common.sh@936 -- # '[' -z 66444 ']' 00:11:56.429 15:10:25 -- common/autotest_common.sh@940 -- # kill -0 66444 00:11:56.429 15:10:25 -- common/autotest_common.sh@941 -- # uname 00:11:56.429 15:10:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:56.429 15:10:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66444 00:11:56.429 killing process with pid 66444 00:11:56.429 15:10:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:56.429 15:10:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:56.429 15:10:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66444' 00:11:56.429 15:10:25 -- common/autotest_common.sh@955 -- # kill 66444 00:11:56.429 15:10:25 -- common/autotest_common.sh@960 -- # wait 66444 00:11:56.689 15:10:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:56.689 15:10:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:56.689 15:10:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:56.689 15:10:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:56.689 15:10:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:56.689 15:10:25 
-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.689 15:10:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:56.689 15:10:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.689 15:10:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:56.689 00:11:56.689 real 0m49.069s 00:11:56.689 user 2m38.538s 00:11:56.689 sys 0m36.752s 00:11:56.689 ************************************ 00:11:56.689 END TEST nvmf_multiconnection 00:11:56.689 ************************************ 00:11:56.689 15:10:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:56.689 15:10:25 -- common/autotest_common.sh@10 -- # set +x 00:11:56.689 15:10:25 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:11:56.689 15:10:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:56.689 15:10:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:56.689 15:10:25 -- common/autotest_common.sh@10 -- # set +x 00:11:56.689 ************************************ 00:11:56.689 START TEST nvmf_initiator_timeout 00:11:56.689 ************************************ 00:11:56.689 15:10:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:11:56.689 * Looking for test storage... 00:11:56.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:56.949 15:10:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:56.949 15:10:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:56.949 15:10:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:56.949 15:10:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:56.949 15:10:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:56.949 15:10:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:56.949 15:10:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:56.949 15:10:26 -- scripts/common.sh@335 -- # IFS=.-: 00:11:56.949 15:10:26 -- scripts/common.sh@335 -- # read -ra ver1 00:11:56.949 15:10:26 -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.949 15:10:26 -- scripts/common.sh@336 -- # read -ra ver2 00:11:56.949 15:10:26 -- scripts/common.sh@337 -- # local 'op=<' 00:11:56.949 15:10:26 -- scripts/common.sh@339 -- # ver1_l=2 00:11:56.949 15:10:26 -- scripts/common.sh@340 -- # ver2_l=1 00:11:56.949 15:10:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:56.949 15:10:26 -- scripts/common.sh@343 -- # case "$op" in 00:11:56.949 15:10:26 -- scripts/common.sh@344 -- # : 1 00:11:56.949 15:10:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:56.949 15:10:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:56.949 15:10:26 -- scripts/common.sh@364 -- # decimal 1 00:11:56.949 15:10:26 -- scripts/common.sh@352 -- # local d=1 00:11:56.949 15:10:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.949 15:10:26 -- scripts/common.sh@354 -- # echo 1 00:11:56.949 15:10:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:56.949 15:10:26 -- scripts/common.sh@365 -- # decimal 2 00:11:56.949 15:10:26 -- scripts/common.sh@352 -- # local d=2 00:11:56.949 15:10:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.949 15:10:26 -- scripts/common.sh@354 -- # echo 2 00:11:56.949 15:10:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:56.949 15:10:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:56.949 15:10:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:56.949 15:10:26 -- scripts/common.sh@367 -- # return 0 00:11:56.949 15:10:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.949 15:10:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:56.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.950 --rc genhtml_branch_coverage=1 00:11:56.950 --rc genhtml_function_coverage=1 00:11:56.950 --rc genhtml_legend=1 00:11:56.950 --rc geninfo_all_blocks=1 00:11:56.950 --rc geninfo_unexecuted_blocks=1 00:11:56.950 00:11:56.950 ' 00:11:56.950 15:10:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:56.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.950 --rc genhtml_branch_coverage=1 00:11:56.950 --rc genhtml_function_coverage=1 00:11:56.950 --rc genhtml_legend=1 00:11:56.950 --rc geninfo_all_blocks=1 00:11:56.950 --rc geninfo_unexecuted_blocks=1 00:11:56.950 00:11:56.950 ' 00:11:56.950 15:10:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:56.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.950 --rc genhtml_branch_coverage=1 00:11:56.950 --rc genhtml_function_coverage=1 00:11:56.950 --rc genhtml_legend=1 00:11:56.950 --rc geninfo_all_blocks=1 00:11:56.950 --rc geninfo_unexecuted_blocks=1 00:11:56.950 00:11:56.950 ' 00:11:56.950 15:10:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:56.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.950 --rc genhtml_branch_coverage=1 00:11:56.950 --rc genhtml_function_coverage=1 00:11:56.950 --rc genhtml_legend=1 00:11:56.950 --rc geninfo_all_blocks=1 00:11:56.950 --rc geninfo_unexecuted_blocks=1 00:11:56.950 00:11:56.950 ' 00:11:56.950 15:10:26 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:56.950 15:10:26 -- nvmf/common.sh@7 -- # uname -s 00:11:56.950 15:10:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.950 15:10:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.950 15:10:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.950 15:10:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.950 15:10:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.950 15:10:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.950 15:10:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.950 15:10:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.950 15:10:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.950 15:10:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.950 15:10:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 
00:11:56.950 15:10:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:11:56.950 15:10:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.950 15:10:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.950 15:10:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:56.950 15:10:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:56.950 15:10:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.950 15:10:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.950 15:10:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.950 15:10:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.950 15:10:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.950 15:10:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.950 15:10:26 -- paths/export.sh@5 -- # export PATH 00:11:56.950 15:10:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.950 15:10:26 -- nvmf/common.sh@46 -- # : 0 00:11:56.950 15:10:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:56.950 15:10:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:56.950 15:10:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:56.950 15:10:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.950 15:10:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.950 15:10:26 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:11:56.950 15:10:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:56.950 15:10:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:56.950 15:10:26 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:56.950 15:10:26 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:56.950 15:10:26 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:11:56.950 15:10:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:56.950 15:10:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.950 15:10:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:56.950 15:10:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:56.950 15:10:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:56.950 15:10:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.950 15:10:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:56.950 15:10:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.950 15:10:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:56.950 15:10:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:56.950 15:10:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:56.950 15:10:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:56.950 15:10:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:56.950 15:10:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:56.950 15:10:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.950 15:10:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.950 15:10:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:56.950 15:10:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:56.950 15:10:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:56.950 15:10:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:56.950 15:10:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:56.950 15:10:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.950 15:10:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:56.950 15:10:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:56.950 15:10:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:56.950 15:10:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:56.950 15:10:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:56.950 15:10:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:56.950 Cannot find device "nvmf_tgt_br" 00:11:56.950 15:10:26 -- nvmf/common.sh@154 -- # true 00:11:56.950 15:10:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:56.950 Cannot find device "nvmf_tgt_br2" 00:11:56.950 15:10:26 -- nvmf/common.sh@155 -- # true 00:11:56.950 15:10:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:56.950 15:10:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:56.950 Cannot find device "nvmf_tgt_br" 00:11:56.950 15:10:26 -- nvmf/common.sh@157 -- # true 00:11:56.950 15:10:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:56.950 Cannot find device "nvmf_tgt_br2" 00:11:56.950 15:10:26 -- nvmf/common.sh@158 -- # true 00:11:56.950 15:10:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:57.209 15:10:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:57.209 15:10:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:11:57.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:57.209 15:10:26 -- nvmf/common.sh@161 -- # true 00:11:57.209 15:10:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:57.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:57.209 15:10:26 -- nvmf/common.sh@162 -- # true 00:11:57.209 15:10:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:57.209 15:10:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:57.209 15:10:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:57.209 15:10:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:57.209 15:10:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:57.209 15:10:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:57.209 15:10:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:57.209 15:10:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:57.209 15:10:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:57.209 15:10:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:57.209 15:10:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:57.209 15:10:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:57.209 15:10:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:57.209 15:10:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:57.209 15:10:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:57.209 15:10:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:57.209 15:10:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:57.209 15:10:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:57.209 15:10:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:57.209 15:10:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:57.209 15:10:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:57.209 15:10:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:57.209 15:10:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:57.209 15:10:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:57.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:57.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:11:57.209 00:11:57.209 --- 10.0.0.2 ping statistics --- 00:11:57.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.210 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:57.210 15:10:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:57.210 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:57.210 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:11:57.210 00:11:57.210 --- 10.0.0.3 ping statistics --- 00:11:57.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.210 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:57.210 15:10:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:57.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:57.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:57.210 00:11:57.210 --- 10.0.0.1 ping statistics --- 00:11:57.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.210 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:57.210 15:10:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.210 15:10:26 -- nvmf/common.sh@421 -- # return 0 00:11:57.210 15:10:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:57.210 15:10:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.210 15:10:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:57.210 15:10:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:57.210 15:10:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.210 15:10:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:57.210 15:10:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:57.210 15:10:26 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:11:57.210 15:10:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:57.210 15:10:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:57.210 15:10:26 -- common/autotest_common.sh@10 -- # set +x 00:11:57.210 15:10:26 -- nvmf/common.sh@469 -- # nvmfpid=67519 00:11:57.210 15:10:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:57.210 15:10:26 -- nvmf/common.sh@470 -- # waitforlisten 67519 00:11:57.210 15:10:26 -- common/autotest_common.sh@829 -- # '[' -z 67519 ']' 00:11:57.210 15:10:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.210 15:10:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:57.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.210 15:10:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.210 15:10:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:57.210 15:10:26 -- common/autotest_common.sh@10 -- # set +x 00:11:57.468 [2024-11-06 15:10:26.515303] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:57.468 [2024-11-06 15:10:26.515887] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.468 [2024-11-06 15:10:26.650648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.468 [2024-11-06 15:10:26.702798] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:57.468 [2024-11-06 15:10:26.703220] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.469 [2024-11-06 15:10:26.703359] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.469 [2024-11-06 15:10:26.703482] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:57.469 [2024-11-06 15:10:26.703789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.469 [2024-11-06 15:10:26.703859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.469 [2024-11-06 15:10:26.703960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.469 [2024-11-06 15:10:26.704028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.407 15:10:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:58.407 15:10:27 -- common/autotest_common.sh@862 -- # return 0 00:11:58.407 15:10:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:58.407 15:10:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:58.407 15:10:27 -- common/autotest_common.sh@10 -- # set +x 00:11:58.407 15:10:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.407 15:10:27 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:58.407 15:10:27 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:58.407 15:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.407 15:10:27 -- common/autotest_common.sh@10 -- # set +x 00:11:58.407 Malloc0 00:11:58.407 15:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.407 15:10:27 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:11:58.407 15:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.407 15:10:27 -- common/autotest_common.sh@10 -- # set +x 00:11:58.407 Delay0 00:11:58.407 15:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.407 15:10:27 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:58.407 15:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.407 15:10:27 -- common/autotest_common.sh@10 -- # set +x 00:11:58.407 [2024-11-06 15:10:27.579946] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.407 15:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.407 15:10:27 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:58.407 15:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.407 15:10:27 -- common/autotest_common.sh@10 -- # set +x 00:11:58.407 15:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.407 15:10:27 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:58.407 15:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.407 15:10:27 -- common/autotest_common.sh@10 -- # set +x 00:11:58.407 15:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.407 15:10:27 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.407 15:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.407 15:10:27 -- common/autotest_common.sh@10 -- # set +x 00:11:58.407 [2024-11-06 15:10:27.608155] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.407 15:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.407 15:10:27 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 --hostid=819f6113-9743-44c3-be27-f14abf178c18 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:58.666 15:10:27 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:11:58.666 15:10:27 -- common/autotest_common.sh@1187 -- # local i=0 00:11:58.666 15:10:27 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.666 15:10:27 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:58.666 15:10:27 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:00.570 15:10:29 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:00.570 15:10:29 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:00.570 15:10:29 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:00.570 15:10:29 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:00.570 15:10:29 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:00.570 15:10:29 -- common/autotest_common.sh@1197 -- # return 0 00:12:00.570 15:10:29 -- target/initiator_timeout.sh@35 -- # fio_pid=67583 00:12:00.570 15:10:29 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:12:00.570 15:10:29 -- target/initiator_timeout.sh@37 -- # sleep 3 00:12:00.570 [global] 00:12:00.570 thread=1 00:12:00.570 invalidate=1 00:12:00.570 rw=write 00:12:00.570 time_based=1 00:12:00.570 runtime=60 00:12:00.570 ioengine=libaio 00:12:00.570 direct=1 00:12:00.570 bs=4096 00:12:00.570 iodepth=1 00:12:00.570 norandommap=0 00:12:00.570 numjobs=1 00:12:00.570 00:12:00.570 verify_dump=1 00:12:00.570 verify_backlog=512 00:12:00.570 verify_state_save=0 00:12:00.570 do_verify=1 00:12:00.570 verify=crc32c-intel 00:12:00.570 [job0] 00:12:00.570 filename=/dev/nvme0n1 00:12:00.570 Could not set queue depth (nvme0n1) 00:12:00.827 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:00.827 fio-3.35 00:12:00.827 Starting 1 thread 00:12:04.112 15:10:32 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:12:04.112 15:10:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.112 15:10:32 -- common/autotest_common.sh@10 -- # set +x 00:12:04.112 true 00:12:04.112 15:10:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.112 15:10:32 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:12:04.112 15:10:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.112 15:10:32 -- common/autotest_common.sh@10 -- # set +x 00:12:04.112 true 00:12:04.112 15:10:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.112 15:10:32 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:12:04.112 15:10:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.112 15:10:32 -- common/autotest_common.sh@10 -- # set +x 00:12:04.112 true 00:12:04.112 15:10:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.112 15:10:32 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:12:04.112 15:10:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.112 15:10:32 -- common/autotest_common.sh@10 -- # set +x 00:12:04.112 true 00:12:04.112 15:10:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.112 15:10:32 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:12:06.643 15:10:35 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:12:06.643 15:10:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.643 15:10:35 -- common/autotest_common.sh@10 -- # set +x 00:12:06.643 true 00:12:06.643 15:10:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.643 15:10:35 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:12:06.643 15:10:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.643 15:10:35 -- common/autotest_common.sh@10 -- # set +x 00:12:06.643 true 00:12:06.643 15:10:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.643 15:10:35 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:12:06.643 15:10:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.643 15:10:35 -- common/autotest_common.sh@10 -- # set +x 00:12:06.643 true 00:12:06.643 15:10:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.643 15:10:35 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:12:06.643 15:10:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.643 15:10:35 -- common/autotest_common.sh@10 -- # set +x 00:12:06.643 true 00:12:06.643 15:10:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.643 15:10:35 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:12:06.643 15:10:35 -- target/initiator_timeout.sh@54 -- # wait 67583 00:13:02.900 00:13:02.900 job0: (groupid=0, jobs=1): err= 0: pid=67604: Wed Nov 6 15:11:30 2024 00:13:02.900 read: IOPS=759, BW=3039KiB/s (3112kB/s)(178MiB/60000msec) 00:13:02.900 slat (usec): min=10, max=12835, avg=14.64, stdev=70.17 00:13:02.900 clat (usec): min=158, max=40857k, avg=1108.68, stdev=191354.25 00:13:02.900 lat (usec): min=170, max=40857k, avg=1123.32, stdev=191354.26 00:13:02.900 clat percentiles (usec): 00:13:02.900 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 192], 00:13:02.900 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:13:02.900 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 243], 95.00th=[ 253], 00:13:02.900 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 330], 99.95th=[ 562], 00:13:02.900 | 99.99th=[ 2147] 00:13:02.900 write: IOPS=768, BW=3072KiB/s (3146kB/s)(180MiB/60000msec); 0 zone resets 00:13:02.900 slat (usec): min=13, max=548, avg=21.37, stdev= 6.53 00:13:02.900 clat (usec): min=117, max=1957, avg=165.87, stdev=25.51 00:13:02.900 lat (usec): min=136, max=1976, avg=187.24, stdev=26.54 00:13:02.900 clat percentiles (usec): 00:13:02.900 | 1.00th=[ 127], 5.00th=[ 135], 10.00th=[ 141], 20.00th=[ 149], 00:13:02.900 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 169], 00:13:02.900 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 204], 00:13:02.900 | 99.00th=[ 223], 99.50th=[ 235], 99.90th=[ 258], 99.95th=[ 277], 00:13:02.900 | 99.99th=[ 799] 00:13:02.900 bw ( KiB/s): min= 4096, max=11536, per=100.00%, avg=9484.95, stdev=1422.95, samples=38 00:13:02.900 iops : min= 1024, max= 2884, avg=2371.24, stdev=355.74, samples=38 00:13:02.900 lat (usec) : 250=96.81%, 500=3.15%, 750=0.02%, 1000=0.01% 00:13:02.900 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:13:02.900 cpu : usr=0.61%, sys=2.12%, ctx=91682, majf=0, minf=5 00:13:02.900 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:02.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:13:02.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.900 issued rwts: total=45588,46080,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.900 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:02.900 00:13:02.900 Run status group 0 (all jobs): 00:13:02.901 READ: bw=3039KiB/s (3112kB/s), 3039KiB/s-3039KiB/s (3112kB/s-3112kB/s), io=178MiB (187MB), run=60000-60000msec 00:13:02.901 WRITE: bw=3072KiB/s (3146kB/s), 3072KiB/s-3072KiB/s (3146kB/s-3146kB/s), io=180MiB (189MB), run=60000-60000msec 00:13:02.901 00:13:02.901 Disk stats (read/write): 00:13:02.901 nvme0n1: ios=45811/45574, merge=0/0, ticks=9930/7954, in_queue=17884, util=99.58% 00:13:02.901 15:11:30 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.901 15:11:30 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:02.901 15:11:30 -- common/autotest_common.sh@1208 -- # local i=0 00:13:02.901 15:11:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.901 15:11:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:02.901 15:11:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:02.901 15:11:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.901 15:11:30 -- common/autotest_common.sh@1220 -- # return 0 00:13:02.901 15:11:30 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:13:02.901 15:11:30 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:13:02.901 nvmf hotplug test: fio successful as expected 00:13:02.901 15:11:30 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.901 15:11:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.901 15:11:30 -- common/autotest_common.sh@10 -- # set +x 00:13:02.901 15:11:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.901 15:11:30 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:13:02.901 15:11:30 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:13:02.901 15:11:30 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:13:02.901 15:11:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:02.901 15:11:30 -- nvmf/common.sh@116 -- # sync 00:13:02.901 15:11:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:02.901 15:11:30 -- nvmf/common.sh@119 -- # set +e 00:13:02.901 15:11:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:02.901 15:11:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:02.901 rmmod nvme_tcp 00:13:02.901 rmmod nvme_fabrics 00:13:02.901 rmmod nvme_keyring 00:13:02.901 15:11:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:02.901 15:11:30 -- nvmf/common.sh@123 -- # set -e 00:13:02.901 15:11:30 -- nvmf/common.sh@124 -- # return 0 00:13:02.901 15:11:30 -- nvmf/common.sh@477 -- # '[' -n 67519 ']' 00:13:02.901 15:11:30 -- nvmf/common.sh@478 -- # killprocess 67519 00:13:02.901 15:11:30 -- common/autotest_common.sh@936 -- # '[' -z 67519 ']' 00:13:02.901 15:11:30 -- common/autotest_common.sh@940 -- # kill -0 67519 00:13:02.901 15:11:30 -- common/autotest_common.sh@941 -- # uname 00:13:02.901 15:11:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:02.901 15:11:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67519 00:13:02.901 killing process with pid 67519 
00:13:02.901 15:11:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:02.901 15:11:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:02.901 15:11:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67519' 00:13:02.901 15:11:30 -- common/autotest_common.sh@955 -- # kill 67519 00:13:02.901 15:11:30 -- common/autotest_common.sh@960 -- # wait 67519 00:13:02.901 15:11:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:02.901 15:11:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:02.901 15:11:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:02.901 15:11:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:02.901 15:11:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:02.901 15:11:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.901 15:11:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.901 15:11:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.901 15:11:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:02.901 00:13:02.901 real 1m4.601s 00:13:02.901 user 3m53.841s 00:13:02.901 sys 0m21.424s 00:13:02.901 ************************************ 00:13:02.901 END TEST nvmf_initiator_timeout 00:13:02.901 ************************************ 00:13:02.901 15:11:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:02.901 15:11:30 -- common/autotest_common.sh@10 -- # set +x 00:13:02.901 15:11:30 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:13:02.901 15:11:30 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:13:02.901 15:11:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:02.901 15:11:30 -- common/autotest_common.sh@10 -- # set +x 00:13:02.901 15:11:30 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:13:02.901 15:11:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:02.901 15:11:30 -- common/autotest_common.sh@10 -- # set +x 00:13:02.901 15:11:30 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:13:02.901 15:11:30 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:02.901 15:11:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:02.901 15:11:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:02.901 15:11:30 -- common/autotest_common.sh@10 -- # set +x 00:13:02.901 ************************************ 00:13:02.901 START TEST nvmf_identify 00:13:02.901 ************************************ 00:13:02.901 15:11:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:02.901 * Looking for test storage... 
00:13:02.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:02.901 15:11:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:02.901 15:11:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:02.901 15:11:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:02.901 15:11:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:02.901 15:11:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:02.901 15:11:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:02.901 15:11:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:02.901 15:11:30 -- scripts/common.sh@335 -- # IFS=.-: 00:13:02.901 15:11:30 -- scripts/common.sh@335 -- # read -ra ver1 00:13:02.901 15:11:30 -- scripts/common.sh@336 -- # IFS=.-: 00:13:02.901 15:11:30 -- scripts/common.sh@336 -- # read -ra ver2 00:13:02.901 15:11:30 -- scripts/common.sh@337 -- # local 'op=<' 00:13:02.901 15:11:30 -- scripts/common.sh@339 -- # ver1_l=2 00:13:02.901 15:11:30 -- scripts/common.sh@340 -- # ver2_l=1 00:13:02.901 15:11:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:02.901 15:11:30 -- scripts/common.sh@343 -- # case "$op" in 00:13:02.901 15:11:30 -- scripts/common.sh@344 -- # : 1 00:13:02.901 15:11:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:02.901 15:11:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:02.901 15:11:30 -- scripts/common.sh@364 -- # decimal 1 00:13:02.901 15:11:30 -- scripts/common.sh@352 -- # local d=1 00:13:02.901 15:11:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:02.901 15:11:30 -- scripts/common.sh@354 -- # echo 1 00:13:02.901 15:11:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:02.901 15:11:30 -- scripts/common.sh@365 -- # decimal 2 00:13:02.901 15:11:30 -- scripts/common.sh@352 -- # local d=2 00:13:02.901 15:11:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:02.901 15:11:30 -- scripts/common.sh@354 -- # echo 2 00:13:02.901 15:11:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:02.901 15:11:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:02.901 15:11:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:02.901 15:11:30 -- scripts/common.sh@367 -- # return 0 00:13:02.901 15:11:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:02.901 15:11:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:02.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.901 --rc genhtml_branch_coverage=1 00:13:02.901 --rc genhtml_function_coverage=1 00:13:02.901 --rc genhtml_legend=1 00:13:02.901 --rc geninfo_all_blocks=1 00:13:02.901 --rc geninfo_unexecuted_blocks=1 00:13:02.901 00:13:02.901 ' 00:13:02.901 15:11:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:02.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.901 --rc genhtml_branch_coverage=1 00:13:02.901 --rc genhtml_function_coverage=1 00:13:02.901 --rc genhtml_legend=1 00:13:02.901 --rc geninfo_all_blocks=1 00:13:02.901 --rc geninfo_unexecuted_blocks=1 00:13:02.901 00:13:02.901 ' 00:13:02.901 15:11:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:02.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.901 --rc genhtml_branch_coverage=1 00:13:02.901 --rc genhtml_function_coverage=1 00:13:02.901 --rc genhtml_legend=1 00:13:02.901 --rc geninfo_all_blocks=1 00:13:02.901 --rc geninfo_unexecuted_blocks=1 00:13:02.901 00:13:02.901 ' 00:13:02.901 
15:11:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:02.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.901 --rc genhtml_branch_coverage=1 00:13:02.901 --rc genhtml_function_coverage=1 00:13:02.901 --rc genhtml_legend=1 00:13:02.901 --rc geninfo_all_blocks=1 00:13:02.901 --rc geninfo_unexecuted_blocks=1 00:13:02.901 00:13:02.901 ' 00:13:02.901 15:11:30 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:02.901 15:11:30 -- nvmf/common.sh@7 -- # uname -s 00:13:02.901 15:11:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.901 15:11:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.901 15:11:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.901 15:11:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.901 15:11:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.901 15:11:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.901 15:11:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.901 15:11:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.901 15:11:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.901 15:11:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.901 15:11:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:13:02.901 15:11:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:13:02.901 15:11:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.901 15:11:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.901 15:11:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:02.902 15:11:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:02.902 15:11:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.902 15:11:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.902 15:11:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.902 15:11:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.902 15:11:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.902 15:11:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.902 15:11:30 -- paths/export.sh@5 -- # export PATH 00:13:02.902 15:11:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.902 15:11:30 -- nvmf/common.sh@46 -- # : 0 00:13:02.902 15:11:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:02.902 15:11:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:02.902 15:11:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:02.902 15:11:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.902 15:11:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.902 15:11:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:02.902 15:11:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:02.902 15:11:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:02.902 15:11:30 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:02.902 15:11:30 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:02.902 15:11:30 -- host/identify.sh@14 -- # nvmftestinit 00:13:02.902 15:11:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:02.902 15:11:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.902 15:11:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:02.902 15:11:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:02.902 15:11:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:02.902 15:11:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.902 15:11:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.902 15:11:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.902 15:11:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:02.902 15:11:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:02.902 15:11:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:02.902 15:11:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:02.902 15:11:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:02.902 15:11:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:02.902 15:11:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.902 15:11:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.902 15:11:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:02.902 15:11:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:02.902 15:11:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:02.902 15:11:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:02.902 15:11:30 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:02.902 15:11:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.902 15:11:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:02.902 15:11:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:02.902 15:11:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:02.902 15:11:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:02.902 15:11:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:02.902 15:11:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:02.902 Cannot find device "nvmf_tgt_br" 00:13:02.902 15:11:30 -- nvmf/common.sh@154 -- # true 00:13:02.902 15:11:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:02.902 Cannot find device "nvmf_tgt_br2" 00:13:02.902 15:11:30 -- nvmf/common.sh@155 -- # true 00:13:02.902 15:11:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:02.902 15:11:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:02.902 Cannot find device "nvmf_tgt_br" 00:13:02.902 15:11:30 -- nvmf/common.sh@157 -- # true 00:13:02.902 15:11:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:02.902 Cannot find device "nvmf_tgt_br2" 00:13:02.902 15:11:30 -- nvmf/common.sh@158 -- # true 00:13:02.902 15:11:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:02.902 15:11:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:02.902 15:11:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:02.902 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:02.902 15:11:30 -- nvmf/common.sh@161 -- # true 00:13:02.902 15:11:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:02.902 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:02.902 15:11:30 -- nvmf/common.sh@162 -- # true 00:13:02.902 15:11:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:02.902 15:11:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:02.902 15:11:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:02.902 15:11:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:02.902 15:11:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:02.902 15:11:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:02.902 15:11:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:02.902 15:11:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:02.902 15:11:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:02.902 15:11:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:02.902 15:11:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:02.902 15:11:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:02.902 15:11:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:02.902 15:11:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:02.902 15:11:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:02.902 15:11:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:13:02.902 15:11:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:02.902 15:11:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:02.902 15:11:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:02.902 15:11:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:02.902 15:11:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:02.902 15:11:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:02.902 15:11:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:02.902 15:11:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:02.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:13:02.902 00:13:02.902 --- 10.0.0.2 ping statistics --- 00:13:02.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.902 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:02.902 15:11:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:02.902 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:02.902 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:13:02.902 00:13:02.902 --- 10.0.0.3 ping statistics --- 00:13:02.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.902 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:13:02.902 15:11:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:02.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:02.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:13:02.902 00:13:02.902 --- 10.0.0.1 ping statistics --- 00:13:02.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.902 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:13:02.902 15:11:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.902 15:11:31 -- nvmf/common.sh@421 -- # return 0 00:13:02.902 15:11:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:02.902 15:11:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.902 15:11:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:02.902 15:11:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:02.902 15:11:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.902 15:11:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:02.902 15:11:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:02.902 15:11:31 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:13:02.902 15:11:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:02.902 15:11:31 -- common/autotest_common.sh@10 -- # set +x 00:13:02.902 15:11:31 -- host/identify.sh@19 -- # nvmfpid=68457 00:13:02.902 15:11:31 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:02.902 15:11:31 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:02.902 15:11:31 -- host/identify.sh@23 -- # waitforlisten 68457 00:13:02.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
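The nvmf_veth_init sequence traced above builds the virtual test topology: a network namespace for the target, veth pairs joined by a bridge on the initiator side, and addresses 10.0.0.1 (initiator) / 10.0.0.2 and 10.0.0.3 (target), verified by the three pings. A condensed sketch of the same setup, reusing the interface names from the trace; cleanup, error handling and the second target interface are omitted:

  # Minimal veth/namespace topology, condensed from the commands above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target reachability, as checked in the trace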
00:13:02.902 15:11:31 -- common/autotest_common.sh@829 -- # '[' -z 68457 ']' 00:13:02.902 15:11:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.902 15:11:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:02.902 15:11:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.902 15:11:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:02.902 15:11:31 -- common/autotest_common.sh@10 -- # set +x 00:13:02.903 [2024-11-06 15:11:31.235834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:02.903 [2024-11-06 15:11:31.236090] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.903 [2024-11-06 15:11:31.370155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.903 [2024-11-06 15:11:31.425806] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:02.903 [2024-11-06 15:11:31.426189] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.903 [2024-11-06 15:11:31.426241] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.903 [2024-11-06 15:11:31.426492] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.903 [2024-11-06 15:11:31.426698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.903 [2024-11-06 15:11:31.426828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.903 [2024-11-06 15:11:31.426960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.903 [2024-11-06 15:11:31.426995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.903 15:11:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.903 15:11:32 -- common/autotest_common.sh@862 -- # return 0 00:13:02.903 15:11:32 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:02.903 15:11:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.903 15:11:32 -- common/autotest_common.sh@10 -- # set +x 00:13:02.903 [2024-11-06 15:11:32.150540] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.903 15:11:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.903 15:11:32 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:13:02.903 15:11:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:02.903 15:11:32 -- common/autotest_common.sh@10 -- # set +x 00:13:03.179 15:11:32 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:03.179 15:11:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.179 15:11:32 -- common/autotest_common.sh@10 -- # set +x 00:13:03.179 Malloc0 00:13:03.179 15:11:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.179 15:11:32 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:03.179 15:11:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.179 15:11:32 -- common/autotest_common.sh@10 -- # set +x 00:13:03.179 15:11:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.179 15:11:32 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:13:03.179 15:11:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.179 15:11:32 -- common/autotest_common.sh@10 -- # set +x 00:13:03.179 15:11:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.179 15:11:32 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.179 15:11:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.179 15:11:32 -- common/autotest_common.sh@10 -- # set +x 00:13:03.179 [2024-11-06 15:11:32.244023] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.179 15:11:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.179 15:11:32 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:03.179 15:11:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.179 15:11:32 -- common/autotest_common.sh@10 -- # set +x 00:13:03.179 15:11:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.179 15:11:32 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:13:03.179 15:11:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.179 15:11:32 -- common/autotest_common.sh@10 -- # set +x 00:13:03.179 [2024-11-06 15:11:32.259778] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:13:03.179 [ 00:13:03.179 { 00:13:03.179 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:03.179 "subtype": "Discovery", 00:13:03.179 "listen_addresses": [ 00:13:03.179 { 00:13:03.179 "transport": "TCP", 00:13:03.179 "trtype": "TCP", 00:13:03.179 "adrfam": "IPv4", 00:13:03.179 "traddr": "10.0.0.2", 00:13:03.179 "trsvcid": "4420" 00:13:03.179 } 00:13:03.179 ], 00:13:03.179 "allow_any_host": true, 00:13:03.179 "hosts": [] 00:13:03.179 }, 00:13:03.179 { 00:13:03.179 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:03.179 "subtype": "NVMe", 00:13:03.179 "listen_addresses": [ 00:13:03.179 { 00:13:03.179 "transport": "TCP", 00:13:03.179 "trtype": "TCP", 00:13:03.179 "adrfam": "IPv4", 00:13:03.179 "traddr": "10.0.0.2", 00:13:03.179 "trsvcid": "4420" 00:13:03.179 } 00:13:03.179 ], 00:13:03.179 "allow_any_host": true, 00:13:03.179 "hosts": [], 00:13:03.179 "serial_number": "SPDK00000000000001", 00:13:03.179 "model_number": "SPDK bdev Controller", 00:13:03.179 "max_namespaces": 32, 00:13:03.179 "min_cntlid": 1, 00:13:03.179 "max_cntlid": 65519, 00:13:03.179 "namespaces": [ 00:13:03.179 { 00:13:03.179 "nsid": 1, 00:13:03.179 "bdev_name": "Malloc0", 00:13:03.179 "name": "Malloc0", 00:13:03.179 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:13:03.179 "eui64": "ABCDEF0123456789", 00:13:03.179 "uuid": "2d0d4b72-bf48-4dec-a182-ae90a2394b04" 00:13:03.179 } 00:13:03.179 ] 00:13:03.179 } 00:13:03.179 ] 00:13:03.179 15:11:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.179 15:11:32 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:13:03.179 [2024-11-06 15:11:32.300603] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
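The rpc_cmd calls traced above provision the target that nvmf_get_subsystems then reports, after which host/identify.sh points spdk_nvme_identify at the discovery listener. rpc_cmd is the test-harness wrapper around scripts/rpc.py, so the same provisioning can be sketched as direct rpc.py calls (the RPC socket defaults to /var/tmp/spdk.sock, as in the waitforlisten message above); paths are shortened and both tools are assumed to be on PATH:

  # Target side: nvmf_tgt is already running inside the namespace (see the trace above).
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_get_subsystems   # prints the JSON listed above
  # Initiator side: dump the discovery controller, as the test does next.
  spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all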
00:13:03.179 [2024-11-06 15:11:32.300699] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68491 ] 00:13:03.179 [2024-11-06 15:11:32.442956] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:13:03.179 [2024-11-06 15:11:32.443041] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:03.179 [2024-11-06 15:11:32.443054] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:03.179 [2024-11-06 15:11:32.443067] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:03.465 [2024-11-06 15:11:32.443085] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:13:03.465 [2024-11-06 15:11:32.443308] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:13:03.465 [2024-11-06 15:11:32.443387] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x733d30 0 00:13:03.465 [2024-11-06 15:11:32.456740] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:03.465 [2024-11-06 15:11:32.456770] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:03.465 [2024-11-06 15:11:32.456777] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:03.465 [2024-11-06 15:11:32.456781] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:03.465 [2024-11-06 15:11:32.456827] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.465 [2024-11-06 15:11:32.456835] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.465 [2024-11-06 15:11:32.456841] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x733d30) 00:13:03.465 [2024-11-06 15:11:32.456863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:03.465 [2024-11-06 15:11:32.456905] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x791f30, cid 0, qid 0 00:13:03.465 [2024-11-06 15:11:32.464714] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.465 [2024-11-06 15:11:32.464748] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.465 [2024-11-06 15:11:32.464759] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.465 [2024-11-06 15:11:32.464768] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x791f30) on tqpair=0x733d30 00:13:03.465 [2024-11-06 15:11:32.464796] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:03.465 [2024-11-06 15:11:32.464812] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:13:03.465 [2024-11-06 15:11:32.464820] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:13:03.465 [2024-11-06 15:11:32.464841] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.465 [2024-11-06 15:11:32.464850] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.465 [2024-11-06 15:11:32.464858] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x733d30) 00:13:03.465 [2024-11-06 15:11:32.464871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.465 [2024-11-06 15:11:32.464910] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x791f30, cid 0, qid 0 00:13:03.465 [2024-11-06 15:11:32.464968] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.465 [2024-11-06 15:11:32.464978] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.465 [2024-11-06 15:11:32.464985] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.465 [2024-11-06 15:11:32.464992] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x791f30) on tqpair=0x733d30 00:13:03.465 [2024-11-06 15:11:32.465002] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:13:03.465 [2024-11-06 15:11:32.465016] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:13:03.465 [2024-11-06 15:11:32.465026] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.465 [2024-11-06 15:11:32.465032] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.465 [2024-11-06 15:11:32.465040] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x733d30) 00:13:03.465 [2024-11-06 15:11:32.465053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.465 [2024-11-06 15:11:32.465085] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x791f30, cid 0, qid 0 00:13:03.465 [2024-11-06 15:11:32.465136] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.465 [2024-11-06 15:11:32.465150] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.465 [2024-11-06 15:11:32.465156] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.465 [2024-11-06 15:11:32.465163] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x791f30) on tqpair=0x733d30 00:13:03.465 [2024-11-06 15:11:32.465175] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:13:03.465 [2024-11-06 15:11:32.465191] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:13:03.465 [2024-11-06 15:11:32.465204] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.465 [2024-11-06 15:11:32.465212] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.465 [2024-11-06 15:11:32.465217] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x733d30) 00:13:03.465 [2024-11-06 15:11:32.465227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.465 [2024-11-06 15:11:32.465258] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x791f30, cid 0, qid 0 00:13:03.465 [2024-11-06 15:11:32.465311] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.465 [2024-11-06 15:11:32.465324] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:13:03.465 [2024-11-06 15:11:32.465331] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.465 [2024-11-06 15:11:32.465338] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x791f30) on tqpair=0x733d30 00:13:03.465 [2024-11-06 15:11:32.465348] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:03.465 [2024-11-06 15:11:32.465364] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.465 [2024-11-06 15:11:32.465370] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.465 [2024-11-06 15:11:32.465374] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x733d30) 00:13:03.465 [2024-11-06 15:11:32.465386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.465 [2024-11-06 15:11:32.465417] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x791f30, cid 0, qid 0 00:13:03.465 [2024-11-06 15:11:32.465465] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.465 [2024-11-06 15:11:32.465479] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.465 [2024-11-06 15:11:32.465487] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.465 [2024-11-06 15:11:32.465494] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x791f30) on tqpair=0x733d30 00:13:03.466 [2024-11-06 15:11:32.465503] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:13:03.466 [2024-11-06 15:11:32.465512] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:13:03.466 [2024-11-06 15:11:32.465525] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:03.466 [2024-11-06 15:11:32.465635] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:13:03.466 [2024-11-06 15:11:32.465646] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:03.466 [2024-11-06 15:11:32.465673] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.465684] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.465692] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x733d30) 00:13:03.466 [2024-11-06 15:11:32.465704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.466 [2024-11-06 15:11:32.465734] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x791f30, cid 0, qid 0 00:13:03.466 [2024-11-06 15:11:32.465798] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.466 [2024-11-06 15:11:32.465812] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.466 [2024-11-06 15:11:32.465817] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.465824] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x791f30) on tqpair=0x733d30 00:13:03.466 [2024-11-06 15:11:32.465833] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:03.466 [2024-11-06 15:11:32.465850] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.465859] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.465874] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x733d30) 00:13:03.466 [2024-11-06 15:11:32.465886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.466 [2024-11-06 15:11:32.465917] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x791f30, cid 0, qid 0 00:13:03.466 [2024-11-06 15:11:32.465968] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.466 [2024-11-06 15:11:32.465980] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.466 [2024-11-06 15:11:32.465987] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.465994] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x791f30) on tqpair=0x733d30 00:13:03.466 [2024-11-06 15:11:32.466003] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:03.466 [2024-11-06 15:11:32.466011] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:13:03.466 [2024-11-06 15:11:32.466020] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:13:03.466 [2024-11-06 15:11:32.466045] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:13:03.466 [2024-11-06 15:11:32.466062] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.466068] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.466075] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x733d30) 00:13:03.466 [2024-11-06 15:11:32.466088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.466 [2024-11-06 15:11:32.466116] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x791f30, cid 0, qid 0 00:13:03.466 [2024-11-06 15:11:32.466221] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:03.466 [2024-11-06 15:11:32.466243] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:03.466 [2024-11-06 15:11:32.466252] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.466259] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x733d30): datao=0, datal=4096, cccid=0 00:13:03.466 [2024-11-06 15:11:32.466266] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x791f30) on tqpair(0x733d30): expected_datao=0, payload_size=4096 00:13:03.466 [2024-11-06 15:11:32.466280] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.466288] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.466304] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.466 [2024-11-06 15:11:32.466312] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.466 [2024-11-06 15:11:32.466316] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.466324] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x791f30) on tqpair=0x733d30 00:13:03.466 [2024-11-06 15:11:32.466338] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:13:03.466 [2024-11-06 15:11:32.466348] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:13:03.466 [2024-11-06 15:11:32.466356] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:13:03.466 [2024-11-06 15:11:32.466364] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:13:03.466 [2024-11-06 15:11:32.466373] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:13:03.466 [2024-11-06 15:11:32.466382] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:13:03.466 [2024-11-06 15:11:32.466404] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:13:03.466 [2024-11-06 15:11:32.466419] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.466427] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.466434] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x733d30) 00:13:03.466 [2024-11-06 15:11:32.466448] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:03.466 [2024-11-06 15:11:32.466477] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x791f30, cid 0, qid 0 00:13:03.466 [2024-11-06 15:11:32.466537] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.466 [2024-11-06 15:11:32.466549] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.466 [2024-11-06 15:11:32.466553] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.466558] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x791f30) on tqpair=0x733d30 00:13:03.466 [2024-11-06 15:11:32.466569] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.466578] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.466585] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x733d30) 00:13:03.466 [2024-11-06 15:11:32.466596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.466 [2024-11-06 15:11:32.466603] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.466607] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.466614] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x733d30) 00:13:03.466 [2024-11-06 15:11:32.466624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.466 [2024-11-06 15:11:32.466635] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.466642] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.466647] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x733d30) 00:13:03.466 [2024-11-06 15:11:32.466671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.466 [2024-11-06 15:11:32.466685] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.466692] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.466697] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x733d30) 00:13:03.466 [2024-11-06 15:11:32.466703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.466 [2024-11-06 15:11:32.466710] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:13:03.466 [2024-11-06 15:11:32.466733] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:03.466 [2024-11-06 15:11:32.466746] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.466750] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.466754] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x733d30) 00:13:03.466 [2024-11-06 15:11:32.466766] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.466 [2024-11-06 15:11:32.466800] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x791f30, cid 0, qid 0 00:13:03.466 [2024-11-06 15:11:32.466809] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x792090, cid 1, qid 0 00:13:03.466 [2024-11-06 15:11:32.466818] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7921f0, cid 2, qid 0 00:13:03.466 [2024-11-06 15:11:32.466827] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x792350, cid 3, qid 0 00:13:03.466 [2024-11-06 15:11:32.466835] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7924b0, cid 4, qid 0 00:13:03.466 [2024-11-06 15:11:32.466918] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.466 [2024-11-06 15:11:32.466934] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.466 [2024-11-06 15:11:32.466940] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.466 [2024-11-06 15:11:32.466945] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7924b0) on tqpair=0x733d30 00:13:03.467 
[2024-11-06 15:11:32.466951] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:13:03.467 [2024-11-06 15:11:32.466958] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:13:03.467 [2024-11-06 15:11:32.466976] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.466986] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.466993] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x733d30) 00:13:03.467 [2024-11-06 15:11:32.467003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.467 [2024-11-06 15:11:32.467033] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7924b0, cid 4, qid 0 00:13:03.467 [2024-11-06 15:11:32.467101] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:03.467 [2024-11-06 15:11:32.467111] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:03.467 [2024-11-06 15:11:32.467130] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.467138] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x733d30): datao=0, datal=4096, cccid=4 00:13:03.467 [2024-11-06 15:11:32.467147] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7924b0) on tqpair(0x733d30): expected_datao=0, payload_size=4096 00:13:03.467 [2024-11-06 15:11:32.467159] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.467164] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.467177] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.467 [2024-11-06 15:11:32.467188] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.467 [2024-11-06 15:11:32.467195] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.467202] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7924b0) on tqpair=0x733d30 00:13:03.467 [2024-11-06 15:11:32.467222] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:13:03.467 [2024-11-06 15:11:32.467261] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.467269] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.467276] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x733d30) 00:13:03.467 [2024-11-06 15:11:32.467289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.467 [2024-11-06 15:11:32.467303] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.467310] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.467317] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x733d30) 00:13:03.467 [2024-11-06 15:11:32.467327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:13:03.467 [2024-11-06 15:11:32.467365] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7924b0, cid 4, qid 0 00:13:03.467 [2024-11-06 15:11:32.467375] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x792610, cid 5, qid 0 00:13:03.467 [2024-11-06 15:11:32.467495] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:03.467 [2024-11-06 15:11:32.467508] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:03.467 [2024-11-06 15:11:32.467513] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.467517] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x733d30): datao=0, datal=1024, cccid=4 00:13:03.467 [2024-11-06 15:11:32.467522] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7924b0) on tqpair(0x733d30): expected_datao=0, payload_size=1024 00:13:03.467 [2024-11-06 15:11:32.467533] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.467541] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.467551] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.467 [2024-11-06 15:11:32.467560] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.467 [2024-11-06 15:11:32.467565] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.467569] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x792610) on tqpair=0x733d30 00:13:03.467 [2024-11-06 15:11:32.467598] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.467 [2024-11-06 15:11:32.467611] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.467 [2024-11-06 15:11:32.467615] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.467620] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7924b0) on tqpair=0x733d30 00:13:03.467 [2024-11-06 15:11:32.467644] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.467671] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.467682] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x733d30) 00:13:03.467 [2024-11-06 15:11:32.467695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.467 [2024-11-06 15:11:32.467738] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7924b0, cid 4, qid 0 00:13:03.467 [2024-11-06 15:11:32.467818] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:03.467 [2024-11-06 15:11:32.467833] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:03.467 [2024-11-06 15:11:32.467840] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.467847] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x733d30): datao=0, datal=3072, cccid=4 00:13:03.467 [2024-11-06 15:11:32.467855] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7924b0) on tqpair(0x733d30): expected_datao=0, payload_size=3072 00:13:03.467 [2024-11-06 15:11:32.467864] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:03.467 [2024-11-06 
15:11:32.467869] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.467883] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.467 [2024-11-06 15:11:32.467894] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.467 [2024-11-06 15:11:32.467901] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.467907] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7924b0) on tqpair=0x733d30 00:13:03.467 [2024-11-06 15:11:32.467925] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.467934] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.467941] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x733d30) 00:13:03.467 [2024-11-06 15:11:32.467953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.467 [2024-11-06 15:11:32.467993] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7924b0, cid 4, qid 0 00:13:03.467 [2024-11-06 15:11:32.468062] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:03.467 [2024-11-06 15:11:32.468074] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:03.467 [2024-11-06 15:11:32.468081] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.468088] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x733d30): datao=0, datal=8, cccid=4 00:13:03.467 [2024-11-06 15:11:32.468096] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7924b0) on tqpair(0x733d30): expected_datao=0, payload_size=8 00:13:03.467 [2024-11-06 15:11:32.468108] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.468113] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.468140] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.467 [2024-11-06 15:11:32.468153] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.467 [2024-11-06 15:11:32.468157] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.467 [2024-11-06 15:11:32.468161] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7924b0) on tqpair=0x733d30 00:13:03.467 ===================================================== 00:13:03.467 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:13:03.467 ===================================================== 00:13:03.467 Controller Capabilities/Features 00:13:03.467 ================================ 00:13:03.467 Vendor ID: 0000 00:13:03.467 Subsystem Vendor ID: 0000 00:13:03.467 Serial Number: .................... 00:13:03.467 Model Number: ........................................ 
00:13:03.467 Firmware Version: 24.01.1 00:13:03.467 Recommended Arb Burst: 0 00:13:03.467 IEEE OUI Identifier: 00 00 00 00:13:03.467 Multi-path I/O 00:13:03.467 May have multiple subsystem ports: No 00:13:03.467 May have multiple controllers: No 00:13:03.467 Associated with SR-IOV VF: No 00:13:03.467 Max Data Transfer Size: 131072 00:13:03.467 Max Number of Namespaces: 0 00:13:03.467 Max Number of I/O Queues: 1024 00:13:03.467 NVMe Specification Version (VS): 1.3 00:13:03.467 NVMe Specification Version (Identify): 1.3 00:13:03.467 Maximum Queue Entries: 128 00:13:03.467 Contiguous Queues Required: Yes 00:13:03.467 Arbitration Mechanisms Supported 00:13:03.467 Weighted Round Robin: Not Supported 00:13:03.467 Vendor Specific: Not Supported 00:13:03.467 Reset Timeout: 15000 ms 00:13:03.467 Doorbell Stride: 4 bytes 00:13:03.467 NVM Subsystem Reset: Not Supported 00:13:03.467 Command Sets Supported 00:13:03.467 NVM Command Set: Supported 00:13:03.467 Boot Partition: Not Supported 00:13:03.467 Memory Page Size Minimum: 4096 bytes 00:13:03.467 Memory Page Size Maximum: 4096 bytes 00:13:03.467 Persistent Memory Region: Not Supported 00:13:03.467 Optional Asynchronous Events Supported 00:13:03.467 Namespace Attribute Notices: Not Supported 00:13:03.467 Firmware Activation Notices: Not Supported 00:13:03.467 ANA Change Notices: Not Supported 00:13:03.467 PLE Aggregate Log Change Notices: Not Supported 00:13:03.467 LBA Status Info Alert Notices: Not Supported 00:13:03.467 EGE Aggregate Log Change Notices: Not Supported 00:13:03.467 Normal NVM Subsystem Shutdown event: Not Supported 00:13:03.467 Zone Descriptor Change Notices: Not Supported 00:13:03.467 Discovery Log Change Notices: Supported 00:13:03.467 Controller Attributes 00:13:03.467 128-bit Host Identifier: Not Supported 00:13:03.467 Non-Operational Permissive Mode: Not Supported 00:13:03.467 NVM Sets: Not Supported 00:13:03.467 Read Recovery Levels: Not Supported 00:13:03.467 Endurance Groups: Not Supported 00:13:03.467 Predictable Latency Mode: Not Supported 00:13:03.467 Traffic Based Keep ALive: Not Supported 00:13:03.468 Namespace Granularity: Not Supported 00:13:03.468 SQ Associations: Not Supported 00:13:03.468 UUID List: Not Supported 00:13:03.468 Multi-Domain Subsystem: Not Supported 00:13:03.468 Fixed Capacity Management: Not Supported 00:13:03.468 Variable Capacity Management: Not Supported 00:13:03.468 Delete Endurance Group: Not Supported 00:13:03.468 Delete NVM Set: Not Supported 00:13:03.468 Extended LBA Formats Supported: Not Supported 00:13:03.468 Flexible Data Placement Supported: Not Supported 00:13:03.468 00:13:03.468 Controller Memory Buffer Support 00:13:03.468 ================================ 00:13:03.468 Supported: No 00:13:03.468 00:13:03.468 Persistent Memory Region Support 00:13:03.468 ================================ 00:13:03.468 Supported: No 00:13:03.468 00:13:03.468 Admin Command Set Attributes 00:13:03.468 ============================ 00:13:03.468 Security Send/Receive: Not Supported 00:13:03.468 Format NVM: Not Supported 00:13:03.468 Firmware Activate/Download: Not Supported 00:13:03.468 Namespace Management: Not Supported 00:13:03.468 Device Self-Test: Not Supported 00:13:03.468 Directives: Not Supported 00:13:03.468 NVMe-MI: Not Supported 00:13:03.468 Virtualization Management: Not Supported 00:13:03.468 Doorbell Buffer Config: Not Supported 00:13:03.468 Get LBA Status Capability: Not Supported 00:13:03.468 Command & Feature Lockdown Capability: Not Supported 00:13:03.468 Abort Command Limit: 1 00:13:03.468 
Async Event Request Limit: 4 00:13:03.468 Number of Firmware Slots: N/A 00:13:03.468 Firmware Slot 1 Read-Only: N/A 00:13:03.468 Firmware Activation Without Reset: N/A 00:13:03.468 Multiple Update Detection Support: N/A 00:13:03.468 Firmware Update Granularity: No Information Provided 00:13:03.468 Per-Namespace SMART Log: No 00:13:03.468 Asymmetric Namespace Access Log Page: Not Supported 00:13:03.468 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:13:03.468 Command Effects Log Page: Not Supported 00:13:03.468 Get Log Page Extended Data: Supported 00:13:03.468 Telemetry Log Pages: Not Supported 00:13:03.468 Persistent Event Log Pages: Not Supported 00:13:03.468 Supported Log Pages Log Page: May Support 00:13:03.468 Commands Supported & Effects Log Page: Not Supported 00:13:03.468 Feature Identifiers & Effects Log Page:May Support 00:13:03.468 NVMe-MI Commands & Effects Log Page: May Support 00:13:03.468 Data Area 4 for Telemetry Log: Not Supported 00:13:03.468 Error Log Page Entries Supported: 128 00:13:03.468 Keep Alive: Not Supported 00:13:03.468 00:13:03.468 NVM Command Set Attributes 00:13:03.468 ========================== 00:13:03.468 Submission Queue Entry Size 00:13:03.468 Max: 1 00:13:03.468 Min: 1 00:13:03.468 Completion Queue Entry Size 00:13:03.468 Max: 1 00:13:03.468 Min: 1 00:13:03.468 Number of Namespaces: 0 00:13:03.468 Compare Command: Not Supported 00:13:03.468 Write Uncorrectable Command: Not Supported 00:13:03.468 Dataset Management Command: Not Supported 00:13:03.468 Write Zeroes Command: Not Supported 00:13:03.468 Set Features Save Field: Not Supported 00:13:03.468 Reservations: Not Supported 00:13:03.468 Timestamp: Not Supported 00:13:03.468 Copy: Not Supported 00:13:03.468 Volatile Write Cache: Not Present 00:13:03.468 Atomic Write Unit (Normal): 1 00:13:03.468 Atomic Write Unit (PFail): 1 00:13:03.468 Atomic Compare & Write Unit: 1 00:13:03.468 Fused Compare & Write: Supported 00:13:03.468 Scatter-Gather List 00:13:03.468 SGL Command Set: Supported 00:13:03.468 SGL Keyed: Supported 00:13:03.468 SGL Bit Bucket Descriptor: Not Supported 00:13:03.468 SGL Metadata Pointer: Not Supported 00:13:03.468 Oversized SGL: Not Supported 00:13:03.468 SGL Metadata Address: Not Supported 00:13:03.468 SGL Offset: Supported 00:13:03.468 Transport SGL Data Block: Not Supported 00:13:03.468 Replay Protected Memory Block: Not Supported 00:13:03.468 00:13:03.468 Firmware Slot Information 00:13:03.468 ========================= 00:13:03.468 Active slot: 0 00:13:03.468 00:13:03.468 00:13:03.468 Error Log 00:13:03.468 ========= 00:13:03.468 00:13:03.468 Active Namespaces 00:13:03.468 ================= 00:13:03.468 Discovery Log Page 00:13:03.468 ================== 00:13:03.468 Generation Counter: 2 00:13:03.468 Number of Records: 2 00:13:03.468 Record Format: 0 00:13:03.468 00:13:03.468 Discovery Log Entry 0 00:13:03.468 ---------------------- 00:13:03.468 Transport Type: 3 (TCP) 00:13:03.468 Address Family: 1 (IPv4) 00:13:03.468 Subsystem Type: 3 (Current Discovery Subsystem) 00:13:03.468 Entry Flags: 00:13:03.468 Duplicate Returned Information: 1 00:13:03.468 Explicit Persistent Connection Support for Discovery: 1 00:13:03.468 Transport Requirements: 00:13:03.468 Secure Channel: Not Required 00:13:03.468 Port ID: 0 (0x0000) 00:13:03.468 Controller ID: 65535 (0xffff) 00:13:03.468 Admin Max SQ Size: 128 00:13:03.468 Transport Service Identifier: 4420 00:13:03.468 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:13:03.468 Transport Address: 10.0.0.2 00:13:03.468 
Discovery Log Entry 1 00:13:03.468 ---------------------- 00:13:03.468 Transport Type: 3 (TCP) 00:13:03.468 Address Family: 1 (IPv4) 00:13:03.468 Subsystem Type: 2 (NVM Subsystem) 00:13:03.468 Entry Flags: 00:13:03.468 Duplicate Returned Information: 0 00:13:03.468 Explicit Persistent Connection Support for Discovery: 0 00:13:03.468 Transport Requirements: 00:13:03.468 Secure Channel: Not Required 00:13:03.468 Port ID: 0 (0x0000) 00:13:03.468 Controller ID: 65535 (0xffff) 00:13:03.468 Admin Max SQ Size: 128 00:13:03.468 Transport Service Identifier: 4420 00:13:03.468 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:13:03.468 Transport Address: 10.0.0.2 [2024-11-06 15:11:32.468300] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:13:03.468 [2024-11-06 15:11:32.468327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.468 [2024-11-06 15:11:32.468340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.468 [2024-11-06 15:11:32.468351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.468 [2024-11-06 15:11:32.468359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.468 [2024-11-06 15:11:32.468370] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.468 [2024-11-06 15:11:32.468377] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.468 [2024-11-06 15:11:32.468384] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x733d30) 00:13:03.468 [2024-11-06 15:11:32.468397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.468 [2024-11-06 15:11:32.468431] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x792350, cid 3, qid 0 00:13:03.468 [2024-11-06 15:11:32.468483] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.468 [2024-11-06 15:11:32.468497] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.468 [2024-11-06 15:11:32.468503] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.468 [2024-11-06 15:11:32.468507] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x792350) on tqpair=0x733d30 00:13:03.468 [2024-11-06 15:11:32.468516] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.468 [2024-11-06 15:11:32.468523] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.468 [2024-11-06 15:11:32.468530] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x733d30) 00:13:03.468 [2024-11-06 15:11:32.468543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.468 [2024-11-06 15:11:32.468575] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x792350, cid 3, qid 0 00:13:03.468 [2024-11-06 15:11:32.468643] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.468 [2024-11-06 15:11:32.472683] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.468 [2024-11-06 15:11:32.472705] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.468 [2024-11-06 15:11:32.472711] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x792350) on tqpair=0x733d30 00:13:03.468 [2024-11-06 15:11:32.472734] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:13:03.468 [2024-11-06 15:11:32.472740] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:13:03.468 [2024-11-06 15:11:32.472756] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.468 [2024-11-06 15:11:32.472762] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.468 [2024-11-06 15:11:32.472766] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x733d30) 00:13:03.468 [2024-11-06 15:11:32.472776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.468 [2024-11-06 15:11:32.472805] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x792350, cid 3, qid 0 00:13:03.468 [2024-11-06 15:11:32.472856] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.468 [2024-11-06 15:11:32.472864] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.468 [2024-11-06 15:11:32.472868] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.468 [2024-11-06 15:11:32.472872] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x792350) on tqpair=0x733d30 00:13:03.469 [2024-11-06 15:11:32.472881] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 0 milliseconds 00:13:03.469 00:13:03.469 15:11:32 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:13:03.469 [2024-11-06 15:11:32.513772] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:03.469 [2024-11-06 15:11:32.513812] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68494 ] 00:13:03.469 [2024-11-06 15:11:32.650354] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:13:03.469 [2024-11-06 15:11:32.650432] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:03.469 [2024-11-06 15:11:32.650440] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:03.469 [2024-11-06 15:11:32.650452] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:03.469 [2024-11-06 15:11:32.650465] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:13:03.469 [2024-11-06 15:11:32.650607] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:13:03.469 [2024-11-06 15:11:32.650677] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13cfd30 0 00:13:03.469 [2024-11-06 15:11:32.663738] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:03.469 [2024-11-06 15:11:32.663763] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:03.469 [2024-11-06 15:11:32.663786] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:03.469 [2024-11-06 15:11:32.663790] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:03.469 [2024-11-06 15:11:32.663844] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.663852] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.663857] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13cfd30) 00:13:03.469 [2024-11-06 15:11:32.663871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:03.469 [2024-11-06 15:11:32.663901] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142df30, cid 0, qid 0 00:13:03.469 [2024-11-06 15:11:32.671737] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.469 [2024-11-06 15:11:32.671759] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.469 [2024-11-06 15:11:32.671781] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.671787] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142df30) on tqpair=0x13cfd30 00:13:03.469 [2024-11-06 15:11:32.671799] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:03.469 [2024-11-06 15:11:32.671807] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:13:03.469 [2024-11-06 15:11:32.671813] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:13:03.469 [2024-11-06 15:11:32.671829] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.671834] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.671838] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13cfd30) 00:13:03.469 [2024-11-06 15:11:32.671848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.469 [2024-11-06 15:11:32.671875] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142df30, cid 0, qid 0 00:13:03.469 [2024-11-06 15:11:32.671954] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.469 [2024-11-06 15:11:32.671962] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.469 [2024-11-06 15:11:32.671966] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.671970] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142df30) on tqpair=0x13cfd30 00:13:03.469 [2024-11-06 15:11:32.671977] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:13:03.469 [2024-11-06 15:11:32.671986] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:13:03.469 [2024-11-06 15:11:32.671994] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.671998] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.672002] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13cfd30) 00:13:03.469 [2024-11-06 15:11:32.672011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.469 [2024-11-06 15:11:32.672030] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142df30, cid 0, qid 0 00:13:03.469 [2024-11-06 15:11:32.672083] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.469 [2024-11-06 15:11:32.672095] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.469 [2024-11-06 15:11:32.672100] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.672105] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142df30) on tqpair=0x13cfd30 00:13:03.469 [2024-11-06 15:11:32.672112] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:13:03.469 [2024-11-06 15:11:32.672122] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:13:03.469 [2024-11-06 15:11:32.672130] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.672134] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.672139] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13cfd30) 00:13:03.469 [2024-11-06 15:11:32.672147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.469 [2024-11-06 15:11:32.672165] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142df30, cid 0, qid 0 00:13:03.469 [2024-11-06 15:11:32.672223] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.469 [2024-11-06 15:11:32.672230] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.469 [2024-11-06 
15:11:32.672234] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.672239] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142df30) on tqpair=0x13cfd30 00:13:03.469 [2024-11-06 15:11:32.672246] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:03.469 [2024-11-06 15:11:32.672257] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.672261] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.672266] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13cfd30) 00:13:03.469 [2024-11-06 15:11:32.672274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.469 [2024-11-06 15:11:32.672291] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142df30, cid 0, qid 0 00:13:03.469 [2024-11-06 15:11:32.672344] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.469 [2024-11-06 15:11:32.672356] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.469 [2024-11-06 15:11:32.672360] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.672365] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142df30) on tqpair=0x13cfd30 00:13:03.469 [2024-11-06 15:11:32.672371] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:13:03.469 [2024-11-06 15:11:32.672377] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:13:03.469 [2024-11-06 15:11:32.672386] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:03.469 [2024-11-06 15:11:32.672493] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:13:03.469 [2024-11-06 15:11:32.672497] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:03.469 [2024-11-06 15:11:32.672507] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.672511] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.672515] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13cfd30) 00:13:03.469 [2024-11-06 15:11:32.672523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.469 [2024-11-06 15:11:32.672542] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142df30, cid 0, qid 0 00:13:03.469 [2024-11-06 15:11:32.672590] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.469 [2024-11-06 15:11:32.672597] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.469 [2024-11-06 15:11:32.672602] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.672606] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142df30) on tqpair=0x13cfd30 00:13:03.469 
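
The DEBUG lines around this point show the driver writing CC.EN = 1 and then polling for CSTS.RDY = 1 over fabrics property set/get. If it helps to inspect that state from application code, the sketch below reads the same registers back through SPDK's public accessors (spdk_nvme_ctrlr_get_regs_csts/_cap/_vs); it assumes a ctrlr handle obtained as in the previous sketch and is illustrative only.

#include <stdio.h>
#include "spdk/nvme.h"

/*
 * Sketch only: read back the controller registers that the enable sequence
 * above is driving (CC.EN = 1, then wait for CSTS.RDY = 1). Assumes a ctrlr
 * handle obtained with spdk_nvme_connect() as in the earlier sketch.
 */
static void print_ctrlr_regs(struct spdk_nvme_ctrlr *ctrlr)
{
    union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);
    union spdk_nvme_cap_register  cap  = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
    union spdk_nvme_vs_register   vs   = spdk_nvme_ctrlr_get_regs_vs(ctrlr);

    /* CSTS.RDY = 1 matches "CC.EN = 1 && CSTS.RDY = 1 - controller is ready". */
    printf("CSTS.RDY = %u\n", (unsigned)csts.bits.rdy);
    /* CAP.TO is the enable timeout in 500 ms units (15000 ms in this log). */
    printf("CAP.TO   = %u x 500 ms\n", (unsigned)cap.bits.to);
    /* VS 1.3 is what the identify output below reports. */
    printf("VS       = %u.%u\n", (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr);
}
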
[2024-11-06 15:11:32.672613] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:03.469 [2024-11-06 15:11:32.672624] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.672628] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.672633] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13cfd30) 00:13:03.469 [2024-11-06 15:11:32.672641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.469 [2024-11-06 15:11:32.672671] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142df30, cid 0, qid 0 00:13:03.469 [2024-11-06 15:11:32.672732] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.469 [2024-11-06 15:11:32.672740] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.469 [2024-11-06 15:11:32.672744] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.469 [2024-11-06 15:11:32.672748] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142df30) on tqpair=0x13cfd30 00:13:03.469 [2024-11-06 15:11:32.672754] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:03.470 [2024-11-06 15:11:32.672760] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:13:03.470 [2024-11-06 15:11:32.672769] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:13:03.470 [2024-11-06 15:11:32.672785] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:13:03.470 [2024-11-06 15:11:32.672795] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.672800] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.672804] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13cfd30) 00:13:03.470 [2024-11-06 15:11:32.672813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.470 [2024-11-06 15:11:32.672834] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142df30, cid 0, qid 0 00:13:03.470 [2024-11-06 15:11:32.672938] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:03.470 [2024-11-06 15:11:32.672951] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:03.470 [2024-11-06 15:11:32.672956] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.672960] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13cfd30): datao=0, datal=4096, cccid=0 00:13:03.470 [2024-11-06 15:11:32.672966] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142df30) on tqpair(0x13cfd30): expected_datao=0, payload_size=4096 00:13:03.470 [2024-11-06 15:11:32.672975] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.672981] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.672990] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.470 [2024-11-06 15:11:32.672997] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.470 [2024-11-06 15:11:32.673001] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.673005] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142df30) on tqpair=0x13cfd30 00:13:03.470 [2024-11-06 15:11:32.673016] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:13:03.470 [2024-11-06 15:11:32.673022] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:13:03.470 [2024-11-06 15:11:32.673027] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:13:03.470 [2024-11-06 15:11:32.673032] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:13:03.470 [2024-11-06 15:11:32.673037] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:13:03.470 [2024-11-06 15:11:32.673043] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:13:03.470 [2024-11-06 15:11:32.673057] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:13:03.470 [2024-11-06 15:11:32.673065] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.673070] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.673074] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13cfd30) 00:13:03.470 [2024-11-06 15:11:32.673082] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:03.470 [2024-11-06 15:11:32.673103] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142df30, cid 0, qid 0 00:13:03.470 [2024-11-06 15:11:32.673157] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.470 [2024-11-06 15:11:32.673165] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.470 [2024-11-06 15:11:32.673169] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.673173] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142df30) on tqpair=0x13cfd30 00:13:03.470 [2024-11-06 15:11:32.673182] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.673187] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.673191] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13cfd30) 00:13:03.470 [2024-11-06 15:11:32.673198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.470 [2024-11-06 15:11:32.673205] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.673209] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.673213] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x13cfd30) 00:13:03.470 [2024-11-06 15:11:32.673220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.470 [2024-11-06 15:11:32.673226] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.673230] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.673234] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13cfd30) 00:13:03.470 [2024-11-06 15:11:32.673241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.470 [2024-11-06 15:11:32.673247] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.673251] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.673255] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.470 [2024-11-06 15:11:32.673262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.470 [2024-11-06 15:11:32.673267] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:03.470 [2024-11-06 15:11:32.673281] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:03.470 [2024-11-06 15:11:32.673289] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.673293] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.673297] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13cfd30) 00:13:03.470 [2024-11-06 15:11:32.673305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.470 [2024-11-06 15:11:32.673325] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142df30, cid 0, qid 0 00:13:03.470 [2024-11-06 15:11:32.673332] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e090, cid 1, qid 0 00:13:03.470 [2024-11-06 15:11:32.673338] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e1f0, cid 2, qid 0 00:13:03.470 [2024-11-06 15:11:32.673343] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.470 [2024-11-06 15:11:32.673348] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e4b0, cid 4, qid 0 00:13:03.470 [2024-11-06 15:11:32.673438] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.470 [2024-11-06 15:11:32.673445] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.470 [2024-11-06 15:11:32.673449] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.673453] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e4b0) on tqpair=0x13cfd30 00:13:03.470 [2024-11-06 15:11:32.673460] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:13:03.470 [2024-11-06 15:11:32.673466] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:03.470 [2024-11-06 15:11:32.673475] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:13:03.470 [2024-11-06 15:11:32.673486] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:03.470 [2024-11-06 15:11:32.673494] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.673498] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.673503] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13cfd30) 00:13:03.470 [2024-11-06 15:11:32.673510] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:03.470 [2024-11-06 15:11:32.673529] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e4b0, cid 4, qid 0 00:13:03.470 [2024-11-06 15:11:32.673586] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.470 [2024-11-06 15:11:32.673593] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.470 [2024-11-06 15:11:32.673597] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.673601] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e4b0) on tqpair=0x13cfd30 00:13:03.470 [2024-11-06 15:11:32.673678] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:13:03.470 [2024-11-06 15:11:32.673693] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:03.470 [2024-11-06 15:11:32.673702] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.673706] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.470 [2024-11-06 15:11:32.673710] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13cfd30) 00:13:03.470 [2024-11-06 15:11:32.673719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.470 [2024-11-06 15:11:32.673739] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e4b0, cid 4, qid 0 00:13:03.470 [2024-11-06 15:11:32.673807] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:03.471 [2024-11-06 15:11:32.673814] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:03.471 [2024-11-06 15:11:32.673818] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.673822] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13cfd30): datao=0, datal=4096, cccid=4 00:13:03.471 [2024-11-06 15:11:32.673827] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142e4b0) on tqpair(0x13cfd30): expected_datao=0, payload_size=4096 00:13:03.471 [2024-11-06 15:11:32.673836] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.673840] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:13:03.471 [2024-11-06 15:11:32.673849] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.471 [2024-11-06 15:11:32.673856] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.471 [2024-11-06 15:11:32.673860] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.673864] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e4b0) on tqpair=0x13cfd30 00:13:03.471 [2024-11-06 15:11:32.673881] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:13:03.471 [2024-11-06 15:11:32.673892] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:13:03.471 [2024-11-06 15:11:32.673903] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:13:03.471 [2024-11-06 15:11:32.673911] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.673916] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.673920] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13cfd30) 00:13:03.471 [2024-11-06 15:11:32.673928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.471 [2024-11-06 15:11:32.673948] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e4b0, cid 4, qid 0 00:13:03.471 [2024-11-06 15:11:32.674022] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:03.471 [2024-11-06 15:11:32.674029] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:03.471 [2024-11-06 15:11:32.674033] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674037] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13cfd30): datao=0, datal=4096, cccid=4 00:13:03.471 [2024-11-06 15:11:32.674042] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142e4b0) on tqpair(0x13cfd30): expected_datao=0, payload_size=4096 00:13:03.471 [2024-11-06 15:11:32.674051] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674055] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674064] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.471 [2024-11-06 15:11:32.674071] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.471 [2024-11-06 15:11:32.674075] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674079] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e4b0) on tqpair=0x13cfd30 00:13:03.471 [2024-11-06 15:11:32.674095] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:03.471 [2024-11-06 15:11:32.674106] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:03.471 [2024-11-06 15:11:32.674115] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674119] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674124] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13cfd30) 00:13:03.471 [2024-11-06 15:11:32.674132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.471 [2024-11-06 15:11:32.674151] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e4b0, cid 4, qid 0 00:13:03.471 [2024-11-06 15:11:32.674217] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:03.471 [2024-11-06 15:11:32.674224] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:03.471 [2024-11-06 15:11:32.674228] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674232] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13cfd30): datao=0, datal=4096, cccid=4 00:13:03.471 [2024-11-06 15:11:32.674237] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142e4b0) on tqpair(0x13cfd30): expected_datao=0, payload_size=4096 00:13:03.471 [2024-11-06 15:11:32.674245] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674250] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674259] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.471 [2024-11-06 15:11:32.674265] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.471 [2024-11-06 15:11:32.674269] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674274] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e4b0) on tqpair=0x13cfd30 00:13:03.471 [2024-11-06 15:11:32.674284] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:03.471 [2024-11-06 15:11:32.674293] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:13:03.471 [2024-11-06 15:11:32.674307] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:13:03.471 [2024-11-06 15:11:32.674314] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:03.471 [2024-11-06 15:11:32.674320] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:13:03.471 [2024-11-06 15:11:32.674326] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:13:03.471 [2024-11-06 15:11:32.674331] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:13:03.471 [2024-11-06 15:11:32.674337] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:13:03.471 [2024-11-06 15:11:32.674353] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674358] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674362] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13cfd30) 00:13:03.471 [2024-11-06 15:11:32.674370] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.471 [2024-11-06 15:11:32.674378] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674382] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674386] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13cfd30) 00:13:03.471 [2024-11-06 15:11:32.674393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.471 [2024-11-06 15:11:32.674419] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e4b0, cid 4, qid 0 00:13:03.471 [2024-11-06 15:11:32.674427] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e610, cid 5, qid 0 00:13:03.471 [2024-11-06 15:11:32.674496] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.471 [2024-11-06 15:11:32.674503] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.471 [2024-11-06 15:11:32.674507] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674511] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e4b0) on tqpair=0x13cfd30 00:13:03.471 [2024-11-06 15:11:32.674520] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.471 [2024-11-06 15:11:32.674526] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.471 [2024-11-06 15:11:32.674530] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674534] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e610) on tqpair=0x13cfd30 00:13:03.471 [2024-11-06 15:11:32.674546] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674551] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674555] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13cfd30) 00:13:03.471 [2024-11-06 15:11:32.674563] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.471 [2024-11-06 15:11:32.674580] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e610, cid 5, qid 0 00:13:03.471 [2024-11-06 15:11:32.674632] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.471 [2024-11-06 15:11:32.674639] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.471 [2024-11-06 15:11:32.674643] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674647] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e610) on tqpair=0x13cfd30 00:13:03.471 [2024-11-06 15:11:32.674672] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674679] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674683] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13cfd30) 00:13:03.471 [2024-11-06 15:11:32.674691] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.471 [2024-11-06 15:11:32.674710] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e610, cid 5, qid 0 00:13:03.471 [2024-11-06 15:11:32.674767] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.471 [2024-11-06 15:11:32.674789] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.471 [2024-11-06 15:11:32.674795] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674799] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e610) on tqpair=0x13cfd30 00:13:03.471 [2024-11-06 15:11:32.674812] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674817] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674821] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13cfd30) 00:13:03.471 [2024-11-06 15:11:32.674829] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.471 [2024-11-06 15:11:32.674848] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e610, cid 5, qid 0 00:13:03.471 [2024-11-06 15:11:32.674904] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.471 [2024-11-06 15:11:32.674911] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.471 [2024-11-06 15:11:32.674915] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674920] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e610) on tqpair=0x13cfd30 00:13:03.471 [2024-11-06 15:11:32.674935] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.471 [2024-11-06 15:11:32.674940] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.674944] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13cfd30) 00:13:03.472 [2024-11-06 15:11:32.674951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.472 [2024-11-06 15:11:32.674959] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.674964] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.674968] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13cfd30) 00:13:03.472 [2024-11-06 15:11:32.674975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.472 [2024-11-06 15:11:32.674983] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.674987] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.674992] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x13cfd30) 00:13:03.472 [2024-11-06 15:11:32.674998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:13:03.472 [2024-11-06 15:11:32.675006] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.675011] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.675015] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13cfd30) 00:13:03.472 [2024-11-06 15:11:32.675022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.472 [2024-11-06 15:11:32.675041] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e610, cid 5, qid 0 00:13:03.472 [2024-11-06 15:11:32.675048] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e4b0, cid 4, qid 0 00:13:03.472 [2024-11-06 15:11:32.675054] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e770, cid 6, qid 0 00:13:03.472 [2024-11-06 15:11:32.675059] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e8d0, cid 7, qid 0 00:13:03.472 [2024-11-06 15:11:32.675203] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:03.472 [2024-11-06 15:11:32.675211] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:03.472 [2024-11-06 15:11:32.675215] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.675220] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13cfd30): datao=0, datal=8192, cccid=5 00:13:03.472 [2024-11-06 15:11:32.675226] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142e610) on tqpair(0x13cfd30): expected_datao=0, payload_size=8192 00:13:03.472 [2024-11-06 15:11:32.675246] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.675258] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.675264] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:03.472 [2024-11-06 15:11:32.675271] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:03.472 [2024-11-06 15:11:32.675274] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.675279] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13cfd30): datao=0, datal=512, cccid=4 00:13:03.472 [2024-11-06 15:11:32.675284] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142e4b0) on tqpair(0x13cfd30): expected_datao=0, payload_size=512 00:13:03.472 [2024-11-06 15:11:32.675291] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.675296] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.675302] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:03.472 [2024-11-06 15:11:32.675308] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:03.472 [2024-11-06 15:11:32.675312] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.675316] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13cfd30): datao=0, datal=512, cccid=6 00:13:03.472 [2024-11-06 15:11:32.675321] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142e770) on tqpair(0x13cfd30): expected_datao=0, payload_size=512 00:13:03.472 [2024-11-06 15:11:32.675329] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.675333] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.675339] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:03.472 [2024-11-06 15:11:32.675345] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:03.472 [2024-11-06 15:11:32.675349] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.675353] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13cfd30): datao=0, datal=4096, cccid=7 00:13:03.472 [2024-11-06 15:11:32.675358] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142e8d0) on tqpair(0x13cfd30): expected_datao=0, payload_size=4096 00:13:03.472 [2024-11-06 15:11:32.675366] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.675370] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.675379] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.472 [2024-11-06 15:11:32.675385] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.472 [2024-11-06 15:11:32.675389] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.675394] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e610) on tqpair=0x13cfd30 00:13:03.472 [2024-11-06 15:11:32.675412] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.472 [2024-11-06 15:11:32.675420] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.472 [2024-11-06 15:11:32.675423] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.675428] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e4b0) on tqpair=0x13cfd30 00:13:03.472 [2024-11-06 15:11:32.675439] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.472 [2024-11-06 15:11:32.675446] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.472 [2024-11-06 15:11:32.675450] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.675454] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e770) on tqpair=0x13cfd30 00:13:03.472 [2024-11-06 15:11:32.675463] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.472 [2024-11-06 15:11:32.675469] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.472 [2024-11-06 15:11:32.675473] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.472 [2024-11-06 15:11:32.675477] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e8d0) on tqpair=0x13cfd30 00:13:03.472 ===================================================== 00:13:03.472 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:03.472 ===================================================== 00:13:03.472 Controller Capabilities/Features 00:13:03.472 ================================ 00:13:03.472 Vendor ID: 8086 00:13:03.472 Subsystem Vendor ID: 8086 00:13:03.472 Serial Number: SPDK00000000000001 00:13:03.472 Model Number: SPDK bdev Controller 00:13:03.472 Firmware Version: 24.01.1 00:13:03.472 Recommended Arb Burst: 6 00:13:03.472 IEEE OUI Identifier: e4 d2 5c 00:13:03.472 Multi-path I/O 00:13:03.472 May have multiple subsystem 
ports: Yes 00:13:03.472 May have multiple controllers: Yes 00:13:03.472 Associated with SR-IOV VF: No 00:13:03.472 Max Data Transfer Size: 131072 00:13:03.472 Max Number of Namespaces: 32 00:13:03.472 Max Number of I/O Queues: 127 00:13:03.472 NVMe Specification Version (VS): 1.3 00:13:03.472 NVMe Specification Version (Identify): 1.3 00:13:03.472 Maximum Queue Entries: 128 00:13:03.472 Contiguous Queues Required: Yes 00:13:03.472 Arbitration Mechanisms Supported 00:13:03.472 Weighted Round Robin: Not Supported 00:13:03.472 Vendor Specific: Not Supported 00:13:03.472 Reset Timeout: 15000 ms 00:13:03.472 Doorbell Stride: 4 bytes 00:13:03.472 NVM Subsystem Reset: Not Supported 00:13:03.472 Command Sets Supported 00:13:03.472 NVM Command Set: Supported 00:13:03.472 Boot Partition: Not Supported 00:13:03.472 Memory Page Size Minimum: 4096 bytes 00:13:03.472 Memory Page Size Maximum: 4096 bytes 00:13:03.472 Persistent Memory Region: Not Supported 00:13:03.472 Optional Asynchronous Events Supported 00:13:03.472 Namespace Attribute Notices: Supported 00:13:03.472 Firmware Activation Notices: Not Supported 00:13:03.472 ANA Change Notices: Not Supported 00:13:03.472 PLE Aggregate Log Change Notices: Not Supported 00:13:03.472 LBA Status Info Alert Notices: Not Supported 00:13:03.472 EGE Aggregate Log Change Notices: Not Supported 00:13:03.472 Normal NVM Subsystem Shutdown event: Not Supported 00:13:03.472 Zone Descriptor Change Notices: Not Supported 00:13:03.472 Discovery Log Change Notices: Not Supported 00:13:03.472 Controller Attributes 00:13:03.472 128-bit Host Identifier: Supported 00:13:03.472 Non-Operational Permissive Mode: Not Supported 00:13:03.472 NVM Sets: Not Supported 00:13:03.472 Read Recovery Levels: Not Supported 00:13:03.472 Endurance Groups: Not Supported 00:13:03.472 Predictable Latency Mode: Not Supported 00:13:03.472 Traffic Based Keep ALive: Not Supported 00:13:03.472 Namespace Granularity: Not Supported 00:13:03.472 SQ Associations: Not Supported 00:13:03.472 UUID List: Not Supported 00:13:03.472 Multi-Domain Subsystem: Not Supported 00:13:03.472 Fixed Capacity Management: Not Supported 00:13:03.472 Variable Capacity Management: Not Supported 00:13:03.472 Delete Endurance Group: Not Supported 00:13:03.472 Delete NVM Set: Not Supported 00:13:03.472 Extended LBA Formats Supported: Not Supported 00:13:03.472 Flexible Data Placement Supported: Not Supported 00:13:03.472 00:13:03.472 Controller Memory Buffer Support 00:13:03.472 ================================ 00:13:03.472 Supported: No 00:13:03.472 00:13:03.472 Persistent Memory Region Support 00:13:03.472 ================================ 00:13:03.472 Supported: No 00:13:03.472 00:13:03.472 Admin Command Set Attributes 00:13:03.472 ============================ 00:13:03.472 Security Send/Receive: Not Supported 00:13:03.472 Format NVM: Not Supported 00:13:03.472 Firmware Activate/Download: Not Supported 00:13:03.472 Namespace Management: Not Supported 00:13:03.472 Device Self-Test: Not Supported 00:13:03.472 Directives: Not Supported 00:13:03.472 NVMe-MI: Not Supported 00:13:03.472 Virtualization Management: Not Supported 00:13:03.473 Doorbell Buffer Config: Not Supported 00:13:03.473 Get LBA Status Capability: Not Supported 00:13:03.473 Command & Feature Lockdown Capability: Not Supported 00:13:03.473 Abort Command Limit: 4 00:13:03.473 Async Event Request Limit: 4 00:13:03.473 Number of Firmware Slots: N/A 00:13:03.473 Firmware Slot 1 Read-Only: N/A 00:13:03.473 Firmware Activation Without Reset: N/A 00:13:03.473 Multiple 
Update Detection Support: N/A 00:13:03.473 Firmware Update Granularity: No Information Provided 00:13:03.473 Per-Namespace SMART Log: No 00:13:03.473 Asymmetric Namespace Access Log Page: Not Supported 00:13:03.473 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:13:03.473 Command Effects Log Page: Supported 00:13:03.473 Get Log Page Extended Data: Supported 00:13:03.473 Telemetry Log Pages: Not Supported 00:13:03.473 Persistent Event Log Pages: Not Supported 00:13:03.473 Supported Log Pages Log Page: May Support 00:13:03.473 Commands Supported & Effects Log Page: Not Supported 00:13:03.473 Feature Identifiers & Effects Log Page:May Support 00:13:03.473 NVMe-MI Commands & Effects Log Page: May Support 00:13:03.473 Data Area 4 for Telemetry Log: Not Supported 00:13:03.473 Error Log Page Entries Supported: 128 00:13:03.473 Keep Alive: Supported 00:13:03.473 Keep Alive Granularity: 10000 ms 00:13:03.473 00:13:03.473 NVM Command Set Attributes 00:13:03.473 ========================== 00:13:03.473 Submission Queue Entry Size 00:13:03.473 Max: 64 00:13:03.473 Min: 64 00:13:03.473 Completion Queue Entry Size 00:13:03.473 Max: 16 00:13:03.473 Min: 16 00:13:03.473 Number of Namespaces: 32 00:13:03.473 Compare Command: Supported 00:13:03.473 Write Uncorrectable Command: Not Supported 00:13:03.473 Dataset Management Command: Supported 00:13:03.473 Write Zeroes Command: Supported 00:13:03.473 Set Features Save Field: Not Supported 00:13:03.473 Reservations: Supported 00:13:03.473 Timestamp: Not Supported 00:13:03.473 Copy: Supported 00:13:03.473 Volatile Write Cache: Present 00:13:03.473 Atomic Write Unit (Normal): 1 00:13:03.473 Atomic Write Unit (PFail): 1 00:13:03.473 Atomic Compare & Write Unit: 1 00:13:03.473 Fused Compare & Write: Supported 00:13:03.473 Scatter-Gather List 00:13:03.473 SGL Command Set: Supported 00:13:03.473 SGL Keyed: Supported 00:13:03.473 SGL Bit Bucket Descriptor: Not Supported 00:13:03.473 SGL Metadata Pointer: Not Supported 00:13:03.473 Oversized SGL: Not Supported 00:13:03.473 SGL Metadata Address: Not Supported 00:13:03.473 SGL Offset: Supported 00:13:03.473 Transport SGL Data Block: Not Supported 00:13:03.473 Replay Protected Memory Block: Not Supported 00:13:03.473 00:13:03.473 Firmware Slot Information 00:13:03.473 ========================= 00:13:03.473 Active slot: 1 00:13:03.473 Slot 1 Firmware Revision: 24.01.1 00:13:03.473 00:13:03.473 00:13:03.473 Commands Supported and Effects 00:13:03.473 ============================== 00:13:03.473 Admin Commands 00:13:03.473 -------------- 00:13:03.473 Get Log Page (02h): Supported 00:13:03.473 Identify (06h): Supported 00:13:03.473 Abort (08h): Supported 00:13:03.473 Set Features (09h): Supported 00:13:03.473 Get Features (0Ah): Supported 00:13:03.473 Asynchronous Event Request (0Ch): Supported 00:13:03.473 Keep Alive (18h): Supported 00:13:03.473 I/O Commands 00:13:03.473 ------------ 00:13:03.473 Flush (00h): Supported LBA-Change 00:13:03.473 Write (01h): Supported LBA-Change 00:13:03.473 Read (02h): Supported 00:13:03.473 Compare (05h): Supported 00:13:03.473 Write Zeroes (08h): Supported LBA-Change 00:13:03.473 Dataset Management (09h): Supported LBA-Change 00:13:03.473 Copy (19h): Supported LBA-Change 00:13:03.473 Unknown (79h): Supported LBA-Change 00:13:03.473 Unknown (7Ah): Supported 00:13:03.473 00:13:03.473 Error Log 00:13:03.473 ========= 00:13:03.473 00:13:03.473 Arbitration 00:13:03.473 =========== 00:13:03.473 Arbitration Burst: 1 00:13:03.473 00:13:03.473 Power Management 00:13:03.473 ================ 00:13:03.473 
Number of Power States: 1 00:13:03.473 Current Power State: Power State #0 00:13:03.473 Power State #0: 00:13:03.473 Max Power: 0.00 W 00:13:03.473 Non-Operational State: Operational 00:13:03.473 Entry Latency: Not Reported 00:13:03.473 Exit Latency: Not Reported 00:13:03.473 Relative Read Throughput: 0 00:13:03.473 Relative Read Latency: 0 00:13:03.473 Relative Write Throughput: 0 00:13:03.473 Relative Write Latency: 0 00:13:03.473 Idle Power: Not Reported 00:13:03.473 Active Power: Not Reported 00:13:03.473 Non-Operational Permissive Mode: Not Supported 00:13:03.473 00:13:03.473 Health Information 00:13:03.473 ================== 00:13:03.473 Critical Warnings: 00:13:03.473 Available Spare Space: OK 00:13:03.473 Temperature: OK 00:13:03.473 Device Reliability: OK 00:13:03.473 Read Only: No 00:13:03.473 Volatile Memory Backup: OK 00:13:03.473 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:03.473 Temperature Threshold: [2024-11-06 15:11:32.675591] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.473 [2024-11-06 15:11:32.675598] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.473 [2024-11-06 15:11:32.675603] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13cfd30) 00:13:03.473 [2024-11-06 15:11:32.675611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.473 [2024-11-06 15:11:32.675635] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e8d0, cid 7, qid 0 00:13:03.473 [2024-11-06 15:11:32.679710] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.473 [2024-11-06 15:11:32.679729] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.473 [2024-11-06 15:11:32.679734] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.473 [2024-11-06 15:11:32.679739] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e8d0) on tqpair=0x13cfd30 00:13:03.473 [2024-11-06 15:11:32.679780] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:13:03.473 [2024-11-06 15:11:32.679796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.473 [2024-11-06 15:11:32.679804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.473 [2024-11-06 15:11:32.679811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.473 [2024-11-06 15:11:32.679817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.473 [2024-11-06 15:11:32.679827] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.473 [2024-11-06 15:11:32.679832] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.473 [2024-11-06 15:11:32.679836] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.473 [2024-11-06 15:11:32.679846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.473 [2024-11-06 15:11:32.679872] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 
00:13:03.473 [2024-11-06 15:11:32.679926] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.473 [2024-11-06 15:11:32.679933] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.473 [2024-11-06 15:11:32.679937] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.473 [2024-11-06 15:11:32.679942] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.473 [2024-11-06 15:11:32.679951] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.473 [2024-11-06 15:11:32.679956] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.473 [2024-11-06 15:11:32.679960] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.473 [2024-11-06 15:11:32.679968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.473 [2024-11-06 15:11:32.679989] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.473 [2024-11-06 15:11:32.680055] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.473 [2024-11-06 15:11:32.680062] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.473 [2024-11-06 15:11:32.680066] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.473 [2024-11-06 15:11:32.680070] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.473 [2024-11-06 15:11:32.680076] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:13:03.473 [2024-11-06 15:11:32.680082] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:13:03.473 [2024-11-06 15:11:32.680092] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.473 [2024-11-06 15:11:32.680097] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.473 [2024-11-06 15:11:32.680101] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.473 [2024-11-06 15:11:32.680109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.473 [2024-11-06 15:11:32.680126] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.473 [2024-11-06 15:11:32.680177] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.473 [2024-11-06 15:11:32.680184] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.473 [2024-11-06 15:11:32.680188] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.473 [2024-11-06 15:11:32.680193] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.473 [2024-11-06 15:11:32.680205] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.473 [2024-11-06 15:11:32.680210] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.680214] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.474 [2024-11-06 15:11:32.680221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.474 [2024-11-06 
15:11:32.680238] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.474 [2024-11-06 15:11:32.680286] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.474 [2024-11-06 15:11:32.680293] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.474 [2024-11-06 15:11:32.680297] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.680302] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.474 [2024-11-06 15:11:32.680313] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.680318] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.680322] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.474 [2024-11-06 15:11:32.680329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.474 [2024-11-06 15:11:32.680346] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.474 [2024-11-06 15:11:32.680397] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.474 [2024-11-06 15:11:32.680409] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.474 [2024-11-06 15:11:32.680414] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.680419] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.474 [2024-11-06 15:11:32.680431] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.680436] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.680440] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.474 [2024-11-06 15:11:32.680448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.474 [2024-11-06 15:11:32.680466] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.474 [2024-11-06 15:11:32.680520] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.474 [2024-11-06 15:11:32.680527] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.474 [2024-11-06 15:11:32.680531] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.680536] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.474 [2024-11-06 15:11:32.680547] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.680552] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.680556] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.474 [2024-11-06 15:11:32.680564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.474 [2024-11-06 15:11:32.680580] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.474 [2024-11-06 15:11:32.680626] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:13:03.474 [2024-11-06 15:11:32.680633] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.474 [2024-11-06 15:11:32.680637] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.680641] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.474 [2024-11-06 15:11:32.680653] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.680671] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.680676] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.474 [2024-11-06 15:11:32.680684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.474 [2024-11-06 15:11:32.680703] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.474 [2024-11-06 15:11:32.680750] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.474 [2024-11-06 15:11:32.680757] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.474 [2024-11-06 15:11:32.680761] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.680765] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.474 [2024-11-06 15:11:32.680777] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.680782] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.680786] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.474 [2024-11-06 15:11:32.680793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.474 [2024-11-06 15:11:32.680810] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.474 [2024-11-06 15:11:32.680859] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.474 [2024-11-06 15:11:32.680866] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.474 [2024-11-06 15:11:32.680870] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.680875] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.474 [2024-11-06 15:11:32.680887] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.680891] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.680896] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.474 [2024-11-06 15:11:32.680904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.474 [2024-11-06 15:11:32.680920] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.474 [2024-11-06 15:11:32.680962] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.474 [2024-11-06 15:11:32.680974] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.474 [2024-11-06 15:11:32.680979] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.680983] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.474 [2024-11-06 15:11:32.680995] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.681000] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.681004] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.474 [2024-11-06 15:11:32.681012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.474 [2024-11-06 15:11:32.681029] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.474 [2024-11-06 15:11:32.681081] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.474 [2024-11-06 15:11:32.681088] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.474 [2024-11-06 15:11:32.681092] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.681097] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.474 [2024-11-06 15:11:32.681108] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.681113] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.681117] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.474 [2024-11-06 15:11:32.681125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.474 [2024-11-06 15:11:32.681141] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.474 [2024-11-06 15:11:32.681193] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.474 [2024-11-06 15:11:32.681205] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.474 [2024-11-06 15:11:32.681210] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.681214] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.474 [2024-11-06 15:11:32.681226] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.681231] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.474 [2024-11-06 15:11:32.681235] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.474 [2024-11-06 15:11:32.681243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.474 [2024-11-06 15:11:32.681261] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.474 [2024-11-06 15:11:32.681313] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.474 [2024-11-06 15:11:32.681320] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.475 [2024-11-06 15:11:32.681324] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.681328] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on 
tqpair=0x13cfd30 00:13:03.475 [2024-11-06 15:11:32.681340] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.681344] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.681349] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.475 [2024-11-06 15:11:32.681356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.475 [2024-11-06 15:11:32.681373] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.475 [2024-11-06 15:11:32.681422] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.475 [2024-11-06 15:11:32.681429] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.475 [2024-11-06 15:11:32.681433] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.681437] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.475 [2024-11-06 15:11:32.681449] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.681453] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.681458] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.475 [2024-11-06 15:11:32.681465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.475 [2024-11-06 15:11:32.681482] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.475 [2024-11-06 15:11:32.681527] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.475 [2024-11-06 15:11:32.681534] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.475 [2024-11-06 15:11:32.681538] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.681543] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.475 [2024-11-06 15:11:32.681554] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.681559] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.681563] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.475 [2024-11-06 15:11:32.681570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.475 [2024-11-06 15:11:32.681587] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.475 [2024-11-06 15:11:32.681632] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.475 [2024-11-06 15:11:32.681640] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.475 [2024-11-06 15:11:32.681644] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.681649] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.475 [2024-11-06 15:11:32.681672] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.681678] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.681682] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.475 [2024-11-06 15:11:32.681690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.475 [2024-11-06 15:11:32.681709] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.475 [2024-11-06 15:11:32.681765] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.475 [2024-11-06 15:11:32.681772] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.475 [2024-11-06 15:11:32.681776] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.681780] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.475 [2024-11-06 15:11:32.681792] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.681796] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.681800] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.475 [2024-11-06 15:11:32.681808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.475 [2024-11-06 15:11:32.681825] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.475 [2024-11-06 15:11:32.681874] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.475 [2024-11-06 15:11:32.681882] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.475 [2024-11-06 15:11:32.681886] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.681891] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.475 [2024-11-06 15:11:32.681902] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.681907] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.681911] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.475 [2024-11-06 15:11:32.681919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.475 [2024-11-06 15:11:32.681935] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.475 [2024-11-06 15:11:32.681987] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.475 [2024-11-06 15:11:32.681994] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.475 [2024-11-06 15:11:32.681998] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.682003] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.475 [2024-11-06 15:11:32.682014] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.682019] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.682023] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 
00:13:03.475 [2024-11-06 15:11:32.682031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.475 [2024-11-06 15:11:32.682047] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.475 [2024-11-06 15:11:32.682093] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.475 [2024-11-06 15:11:32.682100] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.475 [2024-11-06 15:11:32.682104] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.682108] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.475 [2024-11-06 15:11:32.682119] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.682124] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.682128] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.475 [2024-11-06 15:11:32.682136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.475 [2024-11-06 15:11:32.682152] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.475 [2024-11-06 15:11:32.682204] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.475 [2024-11-06 15:11:32.682211] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.475 [2024-11-06 15:11:32.682215] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.682219] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.475 [2024-11-06 15:11:32.682230] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.682235] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.682239] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.475 [2024-11-06 15:11:32.682247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.475 [2024-11-06 15:11:32.682263] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.475 [2024-11-06 15:11:32.682312] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.475 [2024-11-06 15:11:32.682319] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.475 [2024-11-06 15:11:32.682323] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.682327] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.475 [2024-11-06 15:11:32.682338] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.682343] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.682347] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.475 [2024-11-06 15:11:32.682355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.475 
[2024-11-06 15:11:32.682371] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.475 [2024-11-06 15:11:32.682420] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.475 [2024-11-06 15:11:32.682427] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.475 [2024-11-06 15:11:32.682431] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.682435] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.475 [2024-11-06 15:11:32.682447] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.682451] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.682455] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.475 [2024-11-06 15:11:32.682463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.475 [2024-11-06 15:11:32.682480] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.475 [2024-11-06 15:11:32.682529] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.475 [2024-11-06 15:11:32.682536] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.475 [2024-11-06 15:11:32.682540] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.682544] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.475 [2024-11-06 15:11:32.682556] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.682560] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.475 [2024-11-06 15:11:32.682564] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.475 [2024-11-06 15:11:32.682572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.475 [2024-11-06 15:11:32.682589] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.476 [2024-11-06 15:11:32.682637] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.476 [2024-11-06 15:11:32.682644] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.476 [2024-11-06 15:11:32.682648] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.682652] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.476 [2024-11-06 15:11:32.682676] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.682681] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.682685] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.476 [2024-11-06 15:11:32.682693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.476 [2024-11-06 15:11:32.682712] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.476 [2024-11-06 15:11:32.682767] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.476 [2024-11-06 15:11:32.682774] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.476 [2024-11-06 15:11:32.682778] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.682782] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.476 [2024-11-06 15:11:32.682794] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.682798] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.682802] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.476 [2024-11-06 15:11:32.682810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.476 [2024-11-06 15:11:32.682827] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.476 [2024-11-06 15:11:32.682872] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.476 [2024-11-06 15:11:32.682879] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.476 [2024-11-06 15:11:32.682883] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.682887] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.476 [2024-11-06 15:11:32.682899] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.682904] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.682908] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.476 [2024-11-06 15:11:32.682916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.476 [2024-11-06 15:11:32.682932] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.476 [2024-11-06 15:11:32.682987] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.476 [2024-11-06 15:11:32.682994] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.476 [2024-11-06 15:11:32.682998] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.683002] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.476 [2024-11-06 15:11:32.683014] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.683019] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.683023] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.476 [2024-11-06 15:11:32.683030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.476 [2024-11-06 15:11:32.683047] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.476 [2024-11-06 15:11:32.683092] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.476 [2024-11-06 15:11:32.683099] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.476 
[2024-11-06 15:11:32.683103] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.683107] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.476 [2024-11-06 15:11:32.683130] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.683136] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.683140] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.476 [2024-11-06 15:11:32.683148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.476 [2024-11-06 15:11:32.683167] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.476 [2024-11-06 15:11:32.683214] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.476 [2024-11-06 15:11:32.683221] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.476 [2024-11-06 15:11:32.683225] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.683230] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.476 [2024-11-06 15:11:32.683246] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.683251] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.683255] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.476 [2024-11-06 15:11:32.683263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.476 [2024-11-06 15:11:32.683279] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.476 [2024-11-06 15:11:32.683328] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.476 [2024-11-06 15:11:32.683340] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.476 [2024-11-06 15:11:32.683345] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.683349] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.476 [2024-11-06 15:11:32.683361] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.683366] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.683370] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.476 [2024-11-06 15:11:32.683378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.476 [2024-11-06 15:11:32.683395] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.476 [2024-11-06 15:11:32.683447] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.476 [2024-11-06 15:11:32.683454] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.476 [2024-11-06 15:11:32.683458] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.683462] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.476 [2024-11-06 15:11:32.683474] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.683479] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.683483] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.476 [2024-11-06 15:11:32.683491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.476 [2024-11-06 15:11:32.683507] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.476 [2024-11-06 15:11:32.683556] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.476 [2024-11-06 15:11:32.683563] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.476 [2024-11-06 15:11:32.683567] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.683572] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.476 [2024-11-06 15:11:32.683583] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.683588] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.683592] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.476 [2024-11-06 15:11:32.683600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.476 [2024-11-06 15:11:32.683616] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.476 [2024-11-06 15:11:32.687707] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.476 [2024-11-06 15:11:32.687730] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.476 [2024-11-06 15:11:32.687735] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.687740] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.476 [2024-11-06 15:11:32.687755] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.687761] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.687765] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cfd30) 00:13:03.476 [2024-11-06 15:11:32.687774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.476 [2024-11-06 15:11:32.687799] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142e350, cid 3, qid 0 00:13:03.476 [2024-11-06 15:11:32.687872] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:03.476 [2024-11-06 15:11:32.687879] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:03.476 [2024-11-06 15:11:32.687883] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:03.476 [2024-11-06 15:11:32.687888] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142e350) on tqpair=0x13cfd30 00:13:03.476 [2024-11-06 15:11:32.687897] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown 
complete in 7 milliseconds 00:13:03.476 0 Kelvin (-273 Celsius) 00:13:03.476 Available Spare: 0% 00:13:03.476 Available Spare Threshold: 0% 00:13:03.476 Life Percentage Used: 0% 00:13:03.476 Data Units Read: 0 00:13:03.476 Data Units Written: 0 00:13:03.476 Host Read Commands: 0 00:13:03.476 Host Write Commands: 0 00:13:03.476 Controller Busy Time: 0 minutes 00:13:03.476 Power Cycles: 0 00:13:03.476 Power On Hours: 0 hours 00:13:03.476 Unsafe Shutdowns: 0 00:13:03.476 Unrecoverable Media Errors: 0 00:13:03.476 Lifetime Error Log Entries: 0 00:13:03.476 Warning Temperature Time: 0 minutes 00:13:03.476 Critical Temperature Time: 0 minutes 00:13:03.476 00:13:03.476 Number of Queues 00:13:03.476 ================ 00:13:03.476 Number of I/O Submission Queues: 127 00:13:03.476 Number of I/O Completion Queues: 127 00:13:03.476 00:13:03.476 Active Namespaces 00:13:03.476 ================= 00:13:03.476 Namespace ID:1 00:13:03.476 Error Recovery Timeout: Unlimited 00:13:03.476 Command Set Identifier: NVM (00h) 00:13:03.476 Deallocate: Supported 00:13:03.476 Deallocated/Unwritten Error: Not Supported 00:13:03.476 Deallocated Read Value: Unknown 00:13:03.477 Deallocate in Write Zeroes: Not Supported 00:13:03.477 Deallocated Guard Field: 0xFFFF 00:13:03.477 Flush: Supported 00:13:03.477 Reservation: Supported 00:13:03.477 Namespace Sharing Capabilities: Multiple Controllers 00:13:03.477 Size (in LBAs): 131072 (0GiB) 00:13:03.477 Capacity (in LBAs): 131072 (0GiB) 00:13:03.477 Utilization (in LBAs): 131072 (0GiB) 00:13:03.477 NGUID: ABCDEF0123456789ABCDEF0123456789 00:13:03.477 EUI64: ABCDEF0123456789 00:13:03.477 UUID: 2d0d4b72-bf48-4dec-a182-ae90a2394b04 00:13:03.477 Thin Provisioning: Not Supported 00:13:03.477 Per-NS Atomic Units: Yes 00:13:03.477 Atomic Boundary Size (Normal): 0 00:13:03.477 Atomic Boundary Size (PFail): 0 00:13:03.477 Atomic Boundary Offset: 0 00:13:03.477 Maximum Single Source Range Length: 65535 00:13:03.477 Maximum Copy Length: 65535 00:13:03.477 Maximum Source Range Count: 1 00:13:03.477 NGUID/EUI64 Never Reused: No 00:13:03.477 Namespace Write Protected: No 00:13:03.477 Number of LBA Formats: 1 00:13:03.477 Current LBA Format: LBA Format #00 00:13:03.477 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:03.477 00:13:03.477 15:11:32 -- host/identify.sh@51 -- # sync 00:13:03.735 15:11:32 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.735 15:11:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.735 15:11:32 -- common/autotest_common.sh@10 -- # set +x 00:13:03.735 15:11:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.735 15:11:32 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:13:03.735 15:11:32 -- host/identify.sh@56 -- # nvmftestfini 00:13:03.735 15:11:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:03.735 15:11:32 -- nvmf/common.sh@116 -- # sync 00:13:03.735 15:11:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:03.735 15:11:32 -- nvmf/common.sh@119 -- # set +e 00:13:03.735 15:11:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:03.735 15:11:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:03.735 rmmod nvme_tcp 00:13:03.735 rmmod nvme_fabrics 00:13:03.735 rmmod nvme_keyring 00:13:03.735 15:11:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:03.735 15:11:32 -- nvmf/common.sh@123 -- # set -e 00:13:03.735 15:11:32 -- nvmf/common.sh@124 -- # return 0 00:13:03.735 15:11:32 -- nvmf/common.sh@477 -- # '[' -n 68457 ']' 00:13:03.735 15:11:32 -- nvmf/common.sh@478 
-- # killprocess 68457 00:13:03.735 15:11:32 -- common/autotest_common.sh@936 -- # '[' -z 68457 ']' 00:13:03.735 15:11:32 -- common/autotest_common.sh@940 -- # kill -0 68457 00:13:03.735 15:11:32 -- common/autotest_common.sh@941 -- # uname 00:13:03.735 15:11:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:03.735 15:11:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68457 00:13:03.736 killing process with pid 68457 00:13:03.736 15:11:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:03.736 15:11:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:03.736 15:11:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68457' 00:13:03.736 15:11:32 -- common/autotest_common.sh@955 -- # kill 68457 00:13:03.736 [2024-11-06 15:11:32.850516] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:13:03.736 15:11:32 -- common/autotest_common.sh@960 -- # wait 68457 00:13:03.994 15:11:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:03.994 15:11:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:03.994 15:11:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:03.994 15:11:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:03.994 15:11:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:03.994 15:11:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.994 15:11:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:03.994 15:11:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.994 15:11:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:03.994 00:13:03.994 real 0m2.495s 00:13:03.994 user 0m6.736s 00:13:03.994 sys 0m0.572s 00:13:03.994 ************************************ 00:13:03.994 END TEST nvmf_identify 00:13:03.994 ************************************ 00:13:03.994 15:11:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:03.994 15:11:33 -- common/autotest_common.sh@10 -- # set +x 00:13:03.994 15:11:33 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:03.994 15:11:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:03.994 15:11:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:03.994 15:11:33 -- common/autotest_common.sh@10 -- # set +x 00:13:03.994 ************************************ 00:13:03.994 START TEST nvmf_perf 00:13:03.994 ************************************ 00:13:03.994 15:11:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:03.994 * Looking for test storage... 
00:13:03.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:03.994 15:11:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:03.994 15:11:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:03.994 15:11:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:04.253 15:11:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:04.253 15:11:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:04.253 15:11:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:04.253 15:11:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:04.253 15:11:33 -- scripts/common.sh@335 -- # IFS=.-: 00:13:04.253 15:11:33 -- scripts/common.sh@335 -- # read -ra ver1 00:13:04.253 15:11:33 -- scripts/common.sh@336 -- # IFS=.-: 00:13:04.253 15:11:33 -- scripts/common.sh@336 -- # read -ra ver2 00:13:04.253 15:11:33 -- scripts/common.sh@337 -- # local 'op=<' 00:13:04.253 15:11:33 -- scripts/common.sh@339 -- # ver1_l=2 00:13:04.253 15:11:33 -- scripts/common.sh@340 -- # ver2_l=1 00:13:04.253 15:11:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:04.253 15:11:33 -- scripts/common.sh@343 -- # case "$op" in 00:13:04.253 15:11:33 -- scripts/common.sh@344 -- # : 1 00:13:04.253 15:11:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:04.253 15:11:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:04.253 15:11:33 -- scripts/common.sh@364 -- # decimal 1 00:13:04.253 15:11:33 -- scripts/common.sh@352 -- # local d=1 00:13:04.253 15:11:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:04.253 15:11:33 -- scripts/common.sh@354 -- # echo 1 00:13:04.253 15:11:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:04.253 15:11:33 -- scripts/common.sh@365 -- # decimal 2 00:13:04.253 15:11:33 -- scripts/common.sh@352 -- # local d=2 00:13:04.253 15:11:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:04.253 15:11:33 -- scripts/common.sh@354 -- # echo 2 00:13:04.253 15:11:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:04.253 15:11:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:04.253 15:11:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:04.253 15:11:33 -- scripts/common.sh@367 -- # return 0 00:13:04.253 15:11:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:04.253 15:11:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:04.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.253 --rc genhtml_branch_coverage=1 00:13:04.253 --rc genhtml_function_coverage=1 00:13:04.253 --rc genhtml_legend=1 00:13:04.253 --rc geninfo_all_blocks=1 00:13:04.253 --rc geninfo_unexecuted_blocks=1 00:13:04.253 00:13:04.253 ' 00:13:04.254 15:11:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:04.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.254 --rc genhtml_branch_coverage=1 00:13:04.254 --rc genhtml_function_coverage=1 00:13:04.254 --rc genhtml_legend=1 00:13:04.254 --rc geninfo_all_blocks=1 00:13:04.254 --rc geninfo_unexecuted_blocks=1 00:13:04.254 00:13:04.254 ' 00:13:04.254 15:11:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:04.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.254 --rc genhtml_branch_coverage=1 00:13:04.254 --rc genhtml_function_coverage=1 00:13:04.254 --rc genhtml_legend=1 00:13:04.254 --rc geninfo_all_blocks=1 00:13:04.254 --rc geninfo_unexecuted_blocks=1 00:13:04.254 00:13:04.254 ' 00:13:04.254 
15:11:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:04.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.254 --rc genhtml_branch_coverage=1 00:13:04.254 --rc genhtml_function_coverage=1 00:13:04.254 --rc genhtml_legend=1 00:13:04.254 --rc geninfo_all_blocks=1 00:13:04.254 --rc geninfo_unexecuted_blocks=1 00:13:04.254 00:13:04.254 ' 00:13:04.254 15:11:33 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:04.254 15:11:33 -- nvmf/common.sh@7 -- # uname -s 00:13:04.254 15:11:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.254 15:11:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.254 15:11:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.254 15:11:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.254 15:11:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.254 15:11:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.254 15:11:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.254 15:11:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.254 15:11:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.254 15:11:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.254 15:11:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:13:04.254 15:11:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:13:04.254 15:11:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.254 15:11:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.254 15:11:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:04.254 15:11:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:04.254 15:11:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.254 15:11:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.254 15:11:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.254 15:11:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.254 15:11:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.254 15:11:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.254 15:11:33 -- paths/export.sh@5 -- # export PATH 00:13:04.254 15:11:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.254 15:11:33 -- nvmf/common.sh@46 -- # : 0 00:13:04.254 15:11:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:04.254 15:11:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:04.254 15:11:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:04.254 15:11:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.254 15:11:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.254 15:11:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:04.254 15:11:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:04.254 15:11:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:04.254 15:11:33 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:04.254 15:11:33 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:04.254 15:11:33 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:04.254 15:11:33 -- host/perf.sh@17 -- # nvmftestinit 00:13:04.254 15:11:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:04.254 15:11:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.254 15:11:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:04.254 15:11:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:04.254 15:11:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:04.254 15:11:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.254 15:11:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.254 15:11:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.254 15:11:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:04.254 15:11:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:04.254 15:11:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:04.254 15:11:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:04.254 15:11:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:04.254 15:11:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:04.254 15:11:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.254 15:11:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.254 15:11:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:04.254 15:11:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:04.254 15:11:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:04.254 15:11:33 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:04.254 15:11:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:04.254 15:11:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.254 15:11:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:04.254 15:11:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:04.254 15:11:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:04.254 15:11:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:04.254 15:11:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:04.254 15:11:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:04.254 Cannot find device "nvmf_tgt_br" 00:13:04.254 15:11:33 -- nvmf/common.sh@154 -- # true 00:13:04.254 15:11:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:04.254 Cannot find device "nvmf_tgt_br2" 00:13:04.254 15:11:33 -- nvmf/common.sh@155 -- # true 00:13:04.254 15:11:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:04.254 15:11:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:04.254 Cannot find device "nvmf_tgt_br" 00:13:04.254 15:11:33 -- nvmf/common.sh@157 -- # true 00:13:04.254 15:11:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:04.254 Cannot find device "nvmf_tgt_br2" 00:13:04.254 15:11:33 -- nvmf/common.sh@158 -- # true 00:13:04.254 15:11:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:04.254 15:11:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:04.254 15:11:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:04.254 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:04.254 15:11:33 -- nvmf/common.sh@161 -- # true 00:13:04.255 15:11:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:04.255 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:04.255 15:11:33 -- nvmf/common.sh@162 -- # true 00:13:04.255 15:11:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:04.255 15:11:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:04.255 15:11:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:04.255 15:11:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:04.255 15:11:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:04.255 15:11:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:04.514 15:11:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:04.514 15:11:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:04.514 15:11:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:04.514 15:11:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:04.514 15:11:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:04.514 15:11:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:04.514 15:11:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:04.514 15:11:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:04.514 15:11:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
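The nvmf_veth_init records up to this point build the virtual test network: a dedicated namespace (nvmf_tgt_ns_spdk), one veth pair for the initiator and two for the target, with the initiator on 10.0.0.1 and the two target listeners on 10.0.0.2/10.0.0.3. Condensed into a plain bash sketch using the same interface names as the trace (stale-device cleanup, the nvmf_br bridge, and the iptables rules follow in the next records):

# Namespace that will host nvmf_tgt
ip netns add nvmf_tgt_ns_spdk
# veth pairs: *_if ends carry traffic, *_br ends get enslaved to the bridge later
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# Move the target-side ends into the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing: initiator on .1, target listeners on .2 and .3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring all ends up on both sides of the namespace boundary
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up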
00:13:04.514 15:11:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:04.514 15:11:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:04.514 15:11:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:04.514 15:11:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:04.514 15:11:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:04.514 15:11:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:04.514 15:11:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:04.514 15:11:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:04.514 15:11:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:04.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:04.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:13:04.514 00:13:04.514 --- 10.0.0.2 ping statistics --- 00:13:04.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.514 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:13:04.514 15:11:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:04.514 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:04.514 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:13:04.514 00:13:04.514 --- 10.0.0.3 ping statistics --- 00:13:04.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.514 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:13:04.514 15:11:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:04.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:04.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:04.514 00:13:04.514 --- 10.0.0.1 ping statistics --- 00:13:04.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.514 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:04.514 15:11:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:04.514 15:11:33 -- nvmf/common.sh@421 -- # return 0 00:13:04.514 15:11:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:04.514 15:11:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:04.514 15:11:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:04.514 15:11:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:04.514 15:11:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:04.514 15:11:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:04.514 15:11:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:04.514 15:11:33 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:13:04.514 15:11:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:04.514 15:11:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:04.514 15:11:33 -- common/autotest_common.sh@10 -- # set +x 00:13:04.514 15:11:33 -- nvmf/common.sh@469 -- # nvmfpid=68667 00:13:04.514 15:11:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:04.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
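With the network in place, the target is launched inside the namespace with the core mask (-m 0xF) and tracepoint mask (-e 0xFFFF) shown in the trace, and the test then blocks until the RPC socket answers. A rough sketch of that launch-and-wait step; the polling loop is only an assumed stand-in for the real waitforlisten helper, whose exact checks are not visible in this log:

# Start the target inside the namespace (flags as traced above), keep its pid.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Assumed approximation of waitforlisten: poll the RPC socket until it responds.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in {1..100}; do
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
    sleep 0.1
done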
00:13:04.514 15:11:33 -- nvmf/common.sh@470 -- # waitforlisten 68667 00:13:04.514 15:11:33 -- common/autotest_common.sh@829 -- # '[' -z 68667 ']' 00:13:04.514 15:11:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.514 15:11:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:04.514 15:11:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.514 15:11:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:04.514 15:11:33 -- common/autotest_common.sh@10 -- # set +x 00:13:04.514 [2024-11-06 15:11:33.758996] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:04.514 [2024-11-06 15:11:33.759282] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.772 [2024-11-06 15:11:33.901165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:04.773 [2024-11-06 15:11:33.970018] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:04.773 [2024-11-06 15:11:33.970421] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.773 [2024-11-06 15:11:33.970584] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.773 [2024-11-06 15:11:33.970768] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:04.773 [2024-11-06 15:11:33.971199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.773 [2024-11-06 15:11:33.971327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.773 [2024-11-06 15:11:33.971382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:04.773 [2024-11-06 15:11:33.971579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.706 15:11:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:05.706 15:11:34 -- common/autotest_common.sh@862 -- # return 0 00:13:05.706 15:11:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:05.706 15:11:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:05.706 15:11:34 -- common/autotest_common.sh@10 -- # set +x 00:13:05.706 15:11:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.706 15:11:34 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:05.706 15:11:34 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:13:05.965 15:11:35 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:13:05.965 15:11:35 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:13:06.532 15:11:35 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:13:06.533 15:11:35 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:06.791 15:11:35 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:13:06.791 15:11:35 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:13:06.791 15:11:35 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:13:06.791 15:11:35 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:13:06.791 15:11:35 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_transport -t tcp -o 00:13:07.050 [2024-11-06 15:11:36.081859] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.050 15:11:36 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:07.310 15:11:36 -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:07.310 15:11:36 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:07.569 15:11:36 -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:07.569 15:11:36 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:13:07.828 15:11:36 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.828 [2024-11-06 15:11:37.055181] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.828 15:11:37 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:08.087 15:11:37 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:13:08.087 15:11:37 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:13:08.087 15:11:37 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:13:08.087 15:11:37 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:13:09.465 Initializing NVMe Controllers 00:13:09.465 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:13:09.465 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:13:09.465 Initialization complete. Launching workers. 00:13:09.465 ======================================================== 00:13:09.465 Latency(us) 00:13:09.466 Device Information : IOPS MiB/s Average min max 00:13:09.466 PCIE (0000:00:06.0) NSID 1 from core 0: 23130.91 90.36 1383.19 355.13 8042.74 00:13:09.466 ======================================================== 00:13:09.466 Total : 23130.91 90.36 1383.19 355.13 8042.74 00:13:09.466 00:13:09.466 15:11:38 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:10.843 Initializing NVMe Controllers 00:13:10.843 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:10.843 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:10.843 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:10.843 Initialization complete. Launching workers. 
00:13:10.843 ======================================================== 00:13:10.843 Latency(us) 00:13:10.843 Device Information : IOPS MiB/s Average min max 00:13:10.843 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3623.94 14.16 275.64 99.06 7158.49 00:13:10.843 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8038.64 4867.04 12035.10 00:13:10.843 ======================================================== 00:13:10.843 Total : 3748.94 14.64 534.48 99.06 12035.10 00:13:10.843 00:13:10.843 15:11:39 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:12.219 Initializing NVMe Controllers 00:13:12.219 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:12.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:12.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:12.219 Initialization complete. Launching workers. 00:13:12.219 ======================================================== 00:13:12.219 Latency(us) 00:13:12.219 Device Information : IOPS MiB/s Average min max 00:13:12.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8569.89 33.48 3735.16 430.55 12479.21 00:13:12.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3936.95 15.38 8174.83 5769.70 16719.44 00:13:12.219 ======================================================== 00:13:12.219 Total : 12506.84 48.85 5132.69 430.55 16719.44 00:13:12.219 00:13:12.219 15:11:41 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:13:12.219 15:11:41 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:14.752 Initializing NVMe Controllers 00:13:14.752 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:14.752 Controller IO queue size 128, less than required. 00:13:14.752 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:14.752 Controller IO queue size 128, less than required. 00:13:14.752 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:14.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:14.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:14.752 Initialization complete. Launching workers. 
00:13:14.752 ======================================================== 00:13:14.752 Latency(us) 00:13:14.753 Device Information : IOPS MiB/s Average min max 00:13:14.753 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1901.84 475.46 68832.44 46611.84 114287.22 00:13:14.753 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 605.45 151.36 214452.76 99613.56 348256.68 00:13:14.753 ======================================================== 00:13:14.753 Total : 2507.28 626.82 103996.19 46611.84 348256.68 00:13:14.753 00:13:14.753 15:11:43 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:13:14.753 No valid NVMe controllers or AIO or URING devices found 00:13:14.753 Initializing NVMe Controllers 00:13:14.753 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:14.753 Controller IO queue size 128, less than required. 00:13:14.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:14.753 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:13:14.753 Controller IO queue size 128, less than required. 00:13:14.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:14.753 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:13:14.753 WARNING: Some requested NVMe devices were skipped 00:13:14.753 15:11:43 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:13:17.284 Initializing NVMe Controllers 00:13:17.284 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:17.284 Controller IO queue size 128, less than required. 00:13:17.284 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:17.284 Controller IO queue size 128, less than required. 00:13:17.284 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:17.284 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:17.284 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:17.284 Initialization complete. Launching workers. 
00:13:17.284 00:13:17.284 ==================== 00:13:17.284 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:13:17.284 TCP transport: 00:13:17.284 polls: 7515 00:13:17.284 idle_polls: 0 00:13:17.284 sock_completions: 7515 00:13:17.284 nvme_completions: 6750 00:13:17.284 submitted_requests: 10356 00:13:17.284 queued_requests: 1 00:13:17.284 00:13:17.284 ==================== 00:13:17.284 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:13:17.284 TCP transport: 00:13:17.284 polls: 8226 00:13:17.284 idle_polls: 0 00:13:17.284 sock_completions: 8226 00:13:17.284 nvme_completions: 6702 00:13:17.284 submitted_requests: 10325 00:13:17.284 queued_requests: 1 00:13:17.284 ======================================================== 00:13:17.284 Latency(us) 00:13:17.284 Device Information : IOPS MiB/s Average min max 00:13:17.284 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1747.30 436.82 74418.27 42488.80 127832.01 00:13:17.284 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1734.82 433.71 74271.28 25214.31 119179.90 00:13:17.284 ======================================================== 00:13:17.284 Total : 3482.12 870.53 74345.03 25214.31 127832.01 00:13:17.284 00:13:17.284 15:11:46 -- host/perf.sh@66 -- # sync 00:13:17.284 15:11:46 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.543 15:11:46 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:13:17.543 15:11:46 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:13:17.543 15:11:46 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:13:17.801 15:11:46 -- host/perf.sh@72 -- # ls_guid=b2ede75c-7a22-4df1-81f7-26e7ae4de7de 00:13:17.801 15:11:46 -- host/perf.sh@73 -- # get_lvs_free_mb b2ede75c-7a22-4df1-81f7-26e7ae4de7de 00:13:17.801 15:11:46 -- common/autotest_common.sh@1353 -- # local lvs_uuid=b2ede75c-7a22-4df1-81f7-26e7ae4de7de 00:13:17.801 15:11:46 -- common/autotest_common.sh@1354 -- # local lvs_info 00:13:17.801 15:11:46 -- common/autotest_common.sh@1355 -- # local fc 00:13:17.801 15:11:46 -- common/autotest_common.sh@1356 -- # local cs 00:13:17.801 15:11:46 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:13:18.060 15:11:47 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:13:18.060 { 00:13:18.060 "uuid": "b2ede75c-7a22-4df1-81f7-26e7ae4de7de", 00:13:18.060 "name": "lvs_0", 00:13:18.060 "base_bdev": "Nvme0n1", 00:13:18.060 "total_data_clusters": 1278, 00:13:18.060 "free_clusters": 1278, 00:13:18.060 "block_size": 4096, 00:13:18.060 "cluster_size": 4194304 00:13:18.060 } 00:13:18.060 ]' 00:13:18.060 15:11:47 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="b2ede75c-7a22-4df1-81f7-26e7ae4de7de") .free_clusters' 00:13:18.060 15:11:47 -- common/autotest_common.sh@1358 -- # fc=1278 00:13:18.060 15:11:47 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="b2ede75c-7a22-4df1-81f7-26e7ae4de7de") .cluster_size' 00:13:18.318 5112 00:13:18.318 15:11:47 -- common/autotest_common.sh@1359 -- # cs=4194304 00:13:18.318 15:11:47 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:13:18.318 15:11:47 -- common/autotest_common.sh@1363 -- # echo 5112 00:13:18.318 15:11:47 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:13:18.318 15:11:47 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create 
-u b2ede75c-7a22-4df1-81f7-26e7ae4de7de lbd_0 5112 00:13:18.576 15:11:47 -- host/perf.sh@80 -- # lb_guid=a0fb85a8-9312-4482-ae1a-95f3150e2367 00:13:18.576 15:11:47 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore a0fb85a8-9312-4482-ae1a-95f3150e2367 lvs_n_0 00:13:18.835 15:11:48 -- host/perf.sh@83 -- # ls_nested_guid=58ab8626-1c58-4756-9c38-9d84f16e7d18 00:13:18.835 15:11:48 -- host/perf.sh@84 -- # get_lvs_free_mb 58ab8626-1c58-4756-9c38-9d84f16e7d18 00:13:18.835 15:11:48 -- common/autotest_common.sh@1353 -- # local lvs_uuid=58ab8626-1c58-4756-9c38-9d84f16e7d18 00:13:18.835 15:11:48 -- common/autotest_common.sh@1354 -- # local lvs_info 00:13:18.835 15:11:48 -- common/autotest_common.sh@1355 -- # local fc 00:13:18.835 15:11:48 -- common/autotest_common.sh@1356 -- # local cs 00:13:18.835 15:11:48 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:13:19.093 15:11:48 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:13:19.093 { 00:13:19.093 "uuid": "b2ede75c-7a22-4df1-81f7-26e7ae4de7de", 00:13:19.093 "name": "lvs_0", 00:13:19.093 "base_bdev": "Nvme0n1", 00:13:19.093 "total_data_clusters": 1278, 00:13:19.093 "free_clusters": 0, 00:13:19.093 "block_size": 4096, 00:13:19.093 "cluster_size": 4194304 00:13:19.093 }, 00:13:19.093 { 00:13:19.093 "uuid": "58ab8626-1c58-4756-9c38-9d84f16e7d18", 00:13:19.093 "name": "lvs_n_0", 00:13:19.093 "base_bdev": "a0fb85a8-9312-4482-ae1a-95f3150e2367", 00:13:19.093 "total_data_clusters": 1276, 00:13:19.093 "free_clusters": 1276, 00:13:19.093 "block_size": 4096, 00:13:19.093 "cluster_size": 4194304 00:13:19.093 } 00:13:19.093 ]' 00:13:19.093 15:11:48 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="58ab8626-1c58-4756-9c38-9d84f16e7d18") .free_clusters' 00:13:19.093 15:11:48 -- common/autotest_common.sh@1358 -- # fc=1276 00:13:19.093 15:11:48 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="58ab8626-1c58-4756-9c38-9d84f16e7d18") .cluster_size' 00:13:19.093 5104 00:13:19.093 15:11:48 -- common/autotest_common.sh@1359 -- # cs=4194304 00:13:19.093 15:11:48 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:13:19.093 15:11:48 -- common/autotest_common.sh@1363 -- # echo 5104 00:13:19.093 15:11:48 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:13:19.093 15:11:48 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 58ab8626-1c58-4756-9c38-9d84f16e7d18 lbd_nest_0 5104 00:13:19.660 15:11:48 -- host/perf.sh@88 -- # lb_nested_guid=2828da94-3129-469e-94cb-8737f35939ae 00:13:19.660 15:11:48 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:19.660 15:11:48 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:13:19.660 15:11:48 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 2828da94-3129-469e-94cb-8737f35939ae 00:13:19.918 15:11:49 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.176 15:11:49 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:13:20.176 15:11:49 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:13:20.176 15:11:49 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:13:20.176 15:11:49 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:13:20.176 15:11:49 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:20.435 No valid NVMe controllers or AIO or URING devices found 00:13:20.693 Initializing NVMe Controllers 00:13:20.693 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:20.693 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:13:20.693 WARNING: Some requested NVMe devices were skipped 00:13:20.693 15:11:49 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:13:20.693 15:11:49 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:32.898 Initializing NVMe Controllers 00:13:32.898 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:32.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:32.898 Initialization complete. Launching workers. 00:13:32.898 ======================================================== 00:13:32.898 Latency(us) 00:13:32.898 Device Information : IOPS MiB/s Average min max 00:13:32.898 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1063.50 132.94 940.23 321.45 9134.04 00:13:32.898 ======================================================== 00:13:32.898 Total : 1063.50 132.94 940.23 321.45 9134.04 00:13:32.898 00:13:32.898 15:11:59 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:13:32.898 15:11:59 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:13:32.898 15:11:59 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:32.898 No valid NVMe controllers or AIO or URING devices found 00:13:32.898 Initializing NVMe Controllers 00:13:32.898 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:32.898 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:13:32.898 WARNING: Some requested NVMe devices were skipped 00:13:32.898 15:12:00 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:13:32.898 15:12:00 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:42.873 Initializing NVMe Controllers 00:13:42.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:42.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:42.873 Initialization complete. Launching workers. 
00:13:42.873 ======================================================== 00:13:42.873 Latency(us) 00:13:42.873 Device Information : IOPS MiB/s Average min max 00:13:42.873 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1330.56 166.32 24079.12 6220.04 63956.83 00:13:42.873 ======================================================== 00:13:42.873 Total : 1330.56 166.32 24079.12 6220.04 63956.83 00:13:42.873 00:13:42.873 15:12:10 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:13:42.873 15:12:10 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:13:42.873 15:12:10 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:42.873 No valid NVMe controllers or AIO or URING devices found 00:13:42.873 Initializing NVMe Controllers 00:13:42.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:42.873 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:13:42.873 WARNING: Some requested NVMe devices were skipped 00:13:42.873 15:12:10 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:13:42.873 15:12:10 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:52.853 Initializing NVMe Controllers 00:13:52.854 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:52.854 Controller IO queue size 128, less than required. 00:13:52.854 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:52.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:52.854 Initialization complete. Launching workers. 
00:13:52.854 ======================================================== 00:13:52.854 Latency(us) 00:13:52.854 Device Information : IOPS MiB/s Average min max 00:13:52.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4043.89 505.49 31661.16 12001.86 64159.23 00:13:52.854 ======================================================== 00:13:52.854 Total : 4043.89 505.49 31661.16 12001.86 64159.23 00:13:52.854 00:13:52.854 15:12:21 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:52.854 15:12:21 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2828da94-3129-469e-94cb-8737f35939ae 00:13:52.854 15:12:21 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:13:52.854 15:12:22 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a0fb85a8-9312-4482-ae1a-95f3150e2367 00:13:53.113 15:12:22 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:13:53.372 15:12:22 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:13:53.372 15:12:22 -- host/perf.sh@114 -- # nvmftestfini 00:13:53.372 15:12:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:53.372 15:12:22 -- nvmf/common.sh@116 -- # sync 00:13:53.372 15:12:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:53.372 15:12:22 -- nvmf/common.sh@119 -- # set +e 00:13:53.372 15:12:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:53.372 15:12:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:53.372 rmmod nvme_tcp 00:13:53.632 rmmod nvme_fabrics 00:13:53.632 rmmod nvme_keyring 00:13:53.632 15:12:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:53.632 15:12:22 -- nvmf/common.sh@123 -- # set -e 00:13:53.632 15:12:22 -- nvmf/common.sh@124 -- # return 0 00:13:53.632 15:12:22 -- nvmf/common.sh@477 -- # '[' -n 68667 ']' 00:13:53.632 15:12:22 -- nvmf/common.sh@478 -- # killprocess 68667 00:13:53.632 15:12:22 -- common/autotest_common.sh@936 -- # '[' -z 68667 ']' 00:13:53.632 15:12:22 -- common/autotest_common.sh@940 -- # kill -0 68667 00:13:53.632 15:12:22 -- common/autotest_common.sh@941 -- # uname 00:13:53.632 15:12:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:53.632 15:12:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68667 00:13:53.632 killing process with pid 68667 00:13:53.632 15:12:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:53.632 15:12:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:53.632 15:12:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68667' 00:13:53.632 15:12:22 -- common/autotest_common.sh@955 -- # kill 68667 00:13:53.632 15:12:22 -- common/autotest_common.sh@960 -- # wait 68667 00:13:54.200 15:12:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:54.200 15:12:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:54.200 15:12:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:54.200 15:12:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:54.200 15:12:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:54.200 15:12:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.201 15:12:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.201 15:12:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.201 15:12:23 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:13:54.201 00:13:54.201 real 0m50.157s 00:13:54.201 user 3m9.093s 00:13:54.201 sys 0m12.625s 00:13:54.201 15:12:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:54.201 15:12:23 -- common/autotest_common.sh@10 -- # set +x 00:13:54.201 ************************************ 00:13:54.201 END TEST nvmf_perf 00:13:54.201 ************************************ 00:13:54.201 15:12:23 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:13:54.201 15:12:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:54.201 15:12:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:54.201 15:12:23 -- common/autotest_common.sh@10 -- # set +x 00:13:54.201 ************************************ 00:13:54.201 START TEST nvmf_fio_host 00:13:54.201 ************************************ 00:13:54.201 15:12:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:13:54.201 * Looking for test storage... 00:13:54.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:54.201 15:12:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:54.201 15:12:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:54.201 15:12:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:54.460 15:12:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:54.460 15:12:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:54.460 15:12:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:54.460 15:12:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:54.460 15:12:23 -- scripts/common.sh@335 -- # IFS=.-: 00:13:54.460 15:12:23 -- scripts/common.sh@335 -- # read -ra ver1 00:13:54.460 15:12:23 -- scripts/common.sh@336 -- # IFS=.-: 00:13:54.460 15:12:23 -- scripts/common.sh@336 -- # read -ra ver2 00:13:54.460 15:12:23 -- scripts/common.sh@337 -- # local 'op=<' 00:13:54.460 15:12:23 -- scripts/common.sh@339 -- # ver1_l=2 00:13:54.460 15:12:23 -- scripts/common.sh@340 -- # ver2_l=1 00:13:54.460 15:12:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:54.460 15:12:23 -- scripts/common.sh@343 -- # case "$op" in 00:13:54.460 15:12:23 -- scripts/common.sh@344 -- # : 1 00:13:54.460 15:12:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:54.460 15:12:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:54.460 15:12:23 -- scripts/common.sh@364 -- # decimal 1 00:13:54.460 15:12:23 -- scripts/common.sh@352 -- # local d=1 00:13:54.460 15:12:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:54.460 15:12:23 -- scripts/common.sh@354 -- # echo 1 00:13:54.460 15:12:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:54.460 15:12:23 -- scripts/common.sh@365 -- # decimal 2 00:13:54.460 15:12:23 -- scripts/common.sh@352 -- # local d=2 00:13:54.461 15:12:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:54.461 15:12:23 -- scripts/common.sh@354 -- # echo 2 00:13:54.461 15:12:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:54.461 15:12:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:54.461 15:12:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:54.461 15:12:23 -- scripts/common.sh@367 -- # return 0 00:13:54.461 15:12:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:54.461 15:12:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:54.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.461 --rc genhtml_branch_coverage=1 00:13:54.461 --rc genhtml_function_coverage=1 00:13:54.461 --rc genhtml_legend=1 00:13:54.461 --rc geninfo_all_blocks=1 00:13:54.461 --rc geninfo_unexecuted_blocks=1 00:13:54.461 00:13:54.461 ' 00:13:54.461 15:12:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:54.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.461 --rc genhtml_branch_coverage=1 00:13:54.461 --rc genhtml_function_coverage=1 00:13:54.461 --rc genhtml_legend=1 00:13:54.461 --rc geninfo_all_blocks=1 00:13:54.461 --rc geninfo_unexecuted_blocks=1 00:13:54.461 00:13:54.461 ' 00:13:54.461 15:12:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:54.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.461 --rc genhtml_branch_coverage=1 00:13:54.461 --rc genhtml_function_coverage=1 00:13:54.461 --rc genhtml_legend=1 00:13:54.461 --rc geninfo_all_blocks=1 00:13:54.461 --rc geninfo_unexecuted_blocks=1 00:13:54.461 00:13:54.461 ' 00:13:54.461 15:12:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:54.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.461 --rc genhtml_branch_coverage=1 00:13:54.461 --rc genhtml_function_coverage=1 00:13:54.461 --rc genhtml_legend=1 00:13:54.461 --rc geninfo_all_blocks=1 00:13:54.461 --rc geninfo_unexecuted_blocks=1 00:13:54.461 00:13:54.461 ' 00:13:54.461 15:12:23 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:54.461 15:12:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.461 15:12:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.461 15:12:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.461 15:12:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.461 15:12:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.461 15:12:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.461 15:12:23 -- paths/export.sh@5 -- # export PATH 00:13:54.461 15:12:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.461 15:12:23 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:54.461 15:12:23 -- nvmf/common.sh@7 -- # uname -s 00:13:54.461 15:12:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.461 15:12:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.461 15:12:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.461 15:12:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.461 15:12:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.461 15:12:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.461 15:12:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.461 15:12:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.461 15:12:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.461 15:12:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.461 15:12:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:13:54.461 15:12:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:13:54.461 15:12:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.461 15:12:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.461 15:12:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:54.461 15:12:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:54.461 15:12:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.461 15:12:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.461 15:12:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.461 15:12:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.461 15:12:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.461 15:12:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.461 15:12:23 -- paths/export.sh@5 -- # export PATH 00:13:54.461 15:12:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.461 15:12:23 -- nvmf/common.sh@46 -- # : 0 00:13:54.461 15:12:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:54.461 15:12:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:54.461 15:12:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:54.461 15:12:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.461 15:12:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.461 15:12:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:54.461 15:12:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:54.461 15:12:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:54.461 15:12:23 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:54.461 15:12:23 -- host/fio.sh@14 -- # nvmftestinit 00:13:54.461 15:12:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:54.461 15:12:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.461 15:12:23 -- nvmf/common.sh@436 -- # prepare_net_devs 
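The lcov gate traced in this test's preamble (scripts/common.sh: lt 1.15 2 -> cmp_versions 1.15 '<' 2) decides whether the extra --rc lcov_* options are needed by splitting both version strings on '.', '-' and ':' and comparing them component by component. A bash sketch of that comparison, simplified from the trace (the real helper also strips non-numeric characters from each component):

# version_lt A B: succeed if version A sorts strictly before version B.
version_lt() {
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}
# e.g. version_lt "$(lcov --version | awk '{print $NF}')" 2   # true for lcov 1.15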
00:13:54.461 15:12:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:54.461 15:12:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:54.461 15:12:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.461 15:12:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.461 15:12:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.461 15:12:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:54.461 15:12:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:54.461 15:12:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:54.461 15:12:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:54.461 15:12:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:54.461 15:12:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:54.461 15:12:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.461 15:12:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.461 15:12:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:54.461 15:12:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:54.461 15:12:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:54.461 15:12:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:54.461 15:12:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:54.461 15:12:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.461 15:12:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:54.461 15:12:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:54.461 15:12:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:54.461 15:12:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:54.461 15:12:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:54.461 15:12:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:54.461 Cannot find device "nvmf_tgt_br" 00:13:54.461 15:12:23 -- nvmf/common.sh@154 -- # true 00:13:54.461 15:12:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:54.461 Cannot find device "nvmf_tgt_br2" 00:13:54.461 15:12:23 -- nvmf/common.sh@155 -- # true 00:13:54.461 15:12:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:54.462 15:12:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:54.462 Cannot find device "nvmf_tgt_br" 00:13:54.462 15:12:23 -- nvmf/common.sh@157 -- # true 00:13:54.462 15:12:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:54.462 Cannot find device "nvmf_tgt_br2" 00:13:54.462 15:12:23 -- nvmf/common.sh@158 -- # true 00:13:54.462 15:12:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:54.462 15:12:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:54.462 15:12:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:54.462 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:54.462 15:12:23 -- nvmf/common.sh@161 -- # true 00:13:54.462 15:12:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:54.462 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:54.462 15:12:23 -- nvmf/common.sh@162 -- # true 00:13:54.462 15:12:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:54.462 15:12:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:54.721 15:12:23 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:54.721 15:12:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:54.721 15:12:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:54.721 15:12:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:54.721 15:12:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:54.721 15:12:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:54.721 15:12:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:54.721 15:12:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:54.721 15:12:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:54.721 15:12:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:54.721 15:12:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:54.721 15:12:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:54.721 15:12:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:54.721 15:12:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:54.721 15:12:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:54.721 15:12:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:54.721 15:12:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:54.721 15:12:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:54.721 15:12:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:54.721 15:12:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:54.721 15:12:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:54.721 15:12:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:54.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:54.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:13:54.721 00:13:54.721 --- 10.0.0.2 ping statistics --- 00:13:54.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.721 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:54.721 15:12:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:54.721 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:54.721 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:13:54.721 00:13:54.721 --- 10.0.0.3 ping statistics --- 00:13:54.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.721 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:54.721 15:12:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:54.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:54.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:13:54.721 00:13:54.721 --- 10.0.0.1 ping statistics --- 00:13:54.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.721 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:13:54.721 15:12:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.721 15:12:23 -- nvmf/common.sh@421 -- # return 0 00:13:54.721 15:12:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:54.721 15:12:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.721 15:12:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:54.721 15:12:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:54.721 15:12:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.721 15:12:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:54.721 15:12:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:54.721 15:12:23 -- host/fio.sh@16 -- # [[ y != y ]] 00:13:54.722 15:12:23 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:13:54.722 15:12:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:54.722 15:12:23 -- common/autotest_common.sh@10 -- # set +x 00:13:54.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.722 15:12:23 -- host/fio.sh@24 -- # nvmfpid=69499 00:13:54.722 15:12:23 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:54.722 15:12:23 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:54.722 15:12:23 -- host/fio.sh@28 -- # waitforlisten 69499 00:13:54.722 15:12:23 -- common/autotest_common.sh@829 -- # '[' -z 69499 ']' 00:13:54.722 15:12:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.722 15:12:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:54.722 15:12:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.722 15:12:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:54.722 15:12:23 -- common/autotest_common.sh@10 -- # set +x 00:13:54.722 [2024-11-06 15:12:23.985743] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:54.722 [2024-11-06 15:12:23.985843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.999 [2024-11-06 15:12:24.122834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:54.999 [2024-11-06 15:12:24.174824] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:54.999 [2024-11-06 15:12:24.175268] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.999 [2024-11-06 15:12:24.175292] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.999 [2024-11-06 15:12:24.175301] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
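The nvmf_veth_init sequence that just completed builds the virtual topology the whole run depends on: the target side lives inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, the initiator stays in the root namespace on 10.0.0.1, and the host-side veth peers are enslaved to the nvmf_br bridge so NVMe/TCP port 4420 is reachable in both directions (verified by the three pings). A minimal standalone sketch of the same topology, trimmed to a single target interface and assuming root privileges:

  # namespace plus one veth pair per endpoint
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # initiator side in the root namespace, target side inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the host-side peers and let NVMe/TCP traffic in
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target reachability check

The target process itself is then started under 'ip netns exec nvmf_tgt_ns_spdk', which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD above.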
00:13:54.999 [2024-11-06 15:12:24.175441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.999 [2024-11-06 15:12:24.175613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.999 [2024-11-06 15:12:24.176084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:54.999 [2024-11-06 15:12:24.176157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.943 15:12:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:55.943 15:12:24 -- common/autotest_common.sh@862 -- # return 0 00:13:55.943 15:12:24 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:55.943 [2024-11-06 15:12:25.214965] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.202 15:12:25 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:13:56.202 15:12:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:56.202 15:12:25 -- common/autotest_common.sh@10 -- # set +x 00:13:56.202 15:12:25 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:56.461 Malloc1 00:13:56.461 15:12:25 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:56.720 15:12:25 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:56.979 15:12:26 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:57.238 [2024-11-06 15:12:26.336627] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.238 15:12:26 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:57.497 15:12:26 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:13:57.497 15:12:26 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:13:57.497 15:12:26 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:13:57.497 15:12:26 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:13:57.497 15:12:26 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:57.497 15:12:26 -- common/autotest_common.sh@1328 -- # local sanitizers 00:13:57.497 15:12:26 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:57.497 15:12:26 -- common/autotest_common.sh@1330 -- # shift 00:13:57.497 15:12:26 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:13:57.497 15:12:26 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:13:57.497 15:12:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:57.497 15:12:26 -- common/autotest_common.sh@1334 -- # grep libasan 00:13:57.497 15:12:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:13:57.497 15:12:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:13:57.497 15:12:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:13:57.497 15:12:26 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:13:57.497 15:12:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:13:57.497 15:12:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:57.497 15:12:26 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:13:57.497 15:12:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:13:57.497 15:12:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:13:57.497 15:12:26 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:57.497 15:12:26 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:13:57.497 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:57.497 fio-3.35 00:13:57.497 Starting 1 thread 00:14:00.031 00:14:00.032 test: (groupid=0, jobs=1): err= 0: pid=69577: Wed Nov 6 15:12:29 2024 00:14:00.032 read: IOPS=9901, BW=38.7MiB/s (40.6MB/s)(77.6MiB/2006msec) 00:14:00.032 slat (nsec): min=1826, max=343276, avg=2485.81, stdev=3359.97 00:14:00.032 clat (usec): min=2432, max=12146, avg=6705.74, stdev=561.26 00:14:00.032 lat (usec): min=2466, max=12148, avg=6708.22, stdev=561.13 00:14:00.032 clat percentiles (usec): 00:14:00.032 | 1.00th=[ 5669], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6325], 00:14:00.032 | 30.00th=[ 6456], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6783], 00:14:00.032 | 70.00th=[ 6915], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7570], 00:14:00.032 | 99.00th=[ 7963], 99.50th=[ 8291], 99.90th=[11600], 99.95th=[11994], 00:14:00.032 | 99.99th=[12125] 00:14:00.032 bw ( KiB/s): min=38808, max=39968, per=100.00%, avg=39614.00, stdev=542.13, samples=4 00:14:00.032 iops : min= 9702, max= 9992, avg=9903.50, stdev=135.53, samples=4 00:14:00.032 write: IOPS=9923, BW=38.8MiB/s (40.6MB/s)(77.8MiB/2006msec); 0 zone resets 00:14:00.032 slat (nsec): min=1973, max=208807, avg=2610.73, stdev=2201.21 00:14:00.032 clat (usec): min=2303, max=11817, avg=6146.40, stdev=539.31 00:14:00.032 lat (usec): min=2316, max=11819, avg=6149.01, stdev=539.27 00:14:00.032 clat percentiles (usec): 00:14:00.032 | 1.00th=[ 5211], 5.00th=[ 5473], 10.00th=[ 5604], 20.00th=[ 5800], 00:14:00.032 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6063], 60.00th=[ 6194], 00:14:00.032 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6718], 95.00th=[ 6915], 00:14:00.032 | 99.00th=[ 7373], 99.50th=[ 8717], 99.90th=[11207], 99.95th=[11600], 00:14:00.032 | 99.99th=[11863] 00:14:00.032 bw ( KiB/s): min=39232, max=40528, per=99.94%, avg=39670.00, stdev=609.10, samples=4 00:14:00.032 iops : min= 9808, max=10132, avg=9917.50, stdev=152.27, samples=4 00:14:00.032 lat (msec) : 4=0.09%, 10=99.54%, 20=0.37% 00:14:00.032 cpu : usr=65.04%, sys=25.99%, ctx=13, majf=0, minf=5 00:14:00.032 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:00.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:00.032 issued rwts: total=19862,19907,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.032 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:00.032 00:14:00.032 Run status group 0 (all jobs): 00:14:00.032 READ: bw=38.7MiB/s (40.6MB/s), 38.7MiB/s-38.7MiB/s (40.6MB/s-40.6MB/s), io=77.6MiB (81.4MB), 
run=2006-2006msec 00:14:00.032 WRITE: bw=38.8MiB/s (40.6MB/s), 38.8MiB/s-38.8MiB/s (40.6MB/s-40.6MB/s), io=77.8MiB (81.5MB), run=2006-2006msec 00:14:00.032 15:12:29 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:00.032 15:12:29 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:00.032 15:12:29 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:00.032 15:12:29 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:00.032 15:12:29 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:00.032 15:12:29 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:00.032 15:12:29 -- common/autotest_common.sh@1330 -- # shift 00:14:00.032 15:12:29 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:00.032 15:12:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:00.032 15:12:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:00.032 15:12:29 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:00.032 15:12:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:00.032 15:12:29 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:00.032 15:12:29 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:00.032 15:12:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:00.032 15:12:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:00.032 15:12:29 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:14:00.032 15:12:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:00.032 15:12:29 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:00.032 15:12:29 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:00.032 15:12:29 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:00.032 15:12:29 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:00.032 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:00.032 fio-3.35 00:14:00.032 Starting 1 thread 00:14:02.567 00:14:02.567 test: (groupid=0, jobs=1): err= 0: pid=69627: Wed Nov 6 15:12:31 2024 00:14:02.567 read: IOPS=8681, BW=136MiB/s (142MB/s)(272MiB/2008msec) 00:14:02.567 slat (usec): min=2, max=134, avg= 3.84, stdev= 2.54 00:14:02.567 clat (usec): min=1815, max=16717, avg=8027.54, stdev=2437.67 00:14:02.567 lat (usec): min=1819, max=16720, avg=8031.38, stdev=2437.89 00:14:02.567 clat percentiles (usec): 00:14:02.567 | 1.00th=[ 4015], 5.00th=[ 4621], 10.00th=[ 5014], 20.00th=[ 5735], 00:14:02.567 | 30.00th=[ 6390], 40.00th=[ 7111], 50.00th=[ 7767], 60.00th=[ 8455], 00:14:02.567 | 70.00th=[ 9241], 80.00th=[10159], 90.00th=[11338], 95.00th=[12387], 00:14:02.567 | 99.00th=[14222], 99.50th=[14746], 99.90th=[15795], 99.95th=[15926], 00:14:02.567 | 99.99th=[16712] 00:14:02.567 bw ( KiB/s): min=63424, max=72768, per=49.96%, avg=69400.00, stdev=4284.76, samples=4 00:14:02.567 iops : 
min= 3964, max= 4548, avg=4337.50, stdev=267.80, samples=4 00:14:02.567 write: IOPS=4919, BW=76.9MiB/s (80.6MB/s)(141MiB/1833msec); 0 zone resets 00:14:02.567 slat (usec): min=31, max=269, avg=38.77, stdev= 8.68 00:14:02.567 clat (usec): min=2865, max=20275, avg=11912.52, stdev=2144.22 00:14:02.567 lat (usec): min=2898, max=20310, avg=11951.29, stdev=2145.92 00:14:02.567 clat percentiles (usec): 00:14:02.567 | 1.00th=[ 8029], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10028], 00:14:02.567 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11600], 60.00th=[12256], 00:14:02.567 | 70.00th=[12780], 80.00th=[13698], 90.00th=[15008], 95.00th=[15926], 00:14:02.567 | 99.00th=[17433], 99.50th=[17957], 99.90th=[18744], 99.95th=[19006], 00:14:02.567 | 99.99th=[20317] 00:14:02.567 bw ( KiB/s): min=65440, max=75232, per=91.65%, avg=72136.00, stdev=4512.44, samples=4 00:14:02.567 iops : min= 4090, max= 4702, avg=4508.50, stdev=282.03, samples=4 00:14:02.567 lat (msec) : 2=0.01%, 4=0.67%, 10=57.02%, 20=42.28%, 50=0.02% 00:14:02.567 cpu : usr=79.92%, sys=14.65%, ctx=5, majf=0, minf=2 00:14:02.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:14:02.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:02.567 issued rwts: total=17433,9017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.567 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:02.567 00:14:02.567 Run status group 0 (all jobs): 00:14:02.567 READ: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=272MiB (286MB), run=2008-2008msec 00:14:02.567 WRITE: bw=76.9MiB/s (80.6MB/s), 76.9MiB/s-76.9MiB/s (80.6MB/s-80.6MB/s), io=141MiB (148MB), run=1833-1833msec 00:14:02.567 15:12:31 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:02.567 15:12:31 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:14:02.567 15:12:31 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:14:02.567 15:12:31 -- host/fio.sh@51 -- # get_nvme_bdfs 00:14:02.567 15:12:31 -- common/autotest_common.sh@1508 -- # bdfs=() 00:14:02.567 15:12:31 -- common/autotest_common.sh@1508 -- # local bdfs 00:14:02.567 15:12:31 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:02.567 15:12:31 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:02.567 15:12:31 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:14:02.826 15:12:31 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:14:02.826 15:12:31 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:14:02.826 15:12:31 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:14:03.089 Nvme0n1 00:14:03.089 15:12:32 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:14:03.347 15:12:32 -- host/fio.sh@53 -- # ls_guid=95e41591-038b-4e77-8959-69435fec37a6 00:14:03.347 15:12:32 -- host/fio.sh@54 -- # get_lvs_free_mb 95e41591-038b-4e77-8959-69435fec37a6 00:14:03.347 15:12:32 -- common/autotest_common.sh@1353 -- # local lvs_uuid=95e41591-038b-4e77-8959-69435fec37a6 00:14:03.347 15:12:32 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:03.347 15:12:32 -- common/autotest_common.sh@1355 -- # local fc 00:14:03.347 
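get_lvs_free_mb, entered here, converts the lvstore's cluster accounting into a size in MiB for the next bdev_lvol_create call: free_clusters times cluster_size, scaled down to MiB. With the 1 GiB cluster size requested above (-c 1073741824) and the 4 free clusters reported for lvs_0, that works out to 4096 MiB, the size handed to 'bdev_lvol_create -l lvs_0 lbd_0 4096' just below. A hedged sketch of the same derivation (rpc.py abbreviates the scripts/rpc.py path used throughout this log):

  uuid=95e41591-038b-4e77-8959-69435fec37a6   # lvs_0 in this run
  fc=$(rpc.py bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\").free_clusters")
  cs=$(rpc.py bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\").cluster_size")
  echo $(( fc * cs / 1024 / 1024 ))           # 4 * 1073741824 / 2^20 = 4096 MiB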
15:12:32 -- common/autotest_common.sh@1356 -- # local cs 00:14:03.347 15:12:32 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:03.606 15:12:32 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:03.606 { 00:14:03.606 "uuid": "95e41591-038b-4e77-8959-69435fec37a6", 00:14:03.606 "name": "lvs_0", 00:14:03.606 "base_bdev": "Nvme0n1", 00:14:03.606 "total_data_clusters": 4, 00:14:03.606 "free_clusters": 4, 00:14:03.606 "block_size": 4096, 00:14:03.606 "cluster_size": 1073741824 00:14:03.606 } 00:14:03.606 ]' 00:14:03.606 15:12:32 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="95e41591-038b-4e77-8959-69435fec37a6") .free_clusters' 00:14:03.606 15:12:32 -- common/autotest_common.sh@1358 -- # fc=4 00:14:03.606 15:12:32 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="95e41591-038b-4e77-8959-69435fec37a6") .cluster_size' 00:14:03.865 4096 00:14:03.865 15:12:32 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:14:03.865 15:12:32 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:14:03.865 15:12:32 -- common/autotest_common.sh@1363 -- # echo 4096 00:14:03.865 15:12:32 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:14:03.865 80aa2dc9-49e1-4a78-b34f-9bb1eb50a329 00:14:03.865 15:12:33 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:14:04.124 15:12:33 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:14:04.383 15:12:33 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:04.642 15:12:33 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:04.642 15:12:33 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:04.642 15:12:33 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:04.642 15:12:33 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:04.642 15:12:33 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:04.642 15:12:33 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:04.642 15:12:33 -- common/autotest_common.sh@1330 -- # shift 00:14:04.642 15:12:33 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:04.642 15:12:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:04.642 15:12:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:04.642 15:12:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:04.642 15:12:33 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:04.642 15:12:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:04.642 15:12:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:04.642 15:12:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:04.642 15:12:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:04.642 15:12:33 
-- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:14:04.642 15:12:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:04.642 15:12:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:04.642 15:12:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:04.642 15:12:33 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:04.642 15:12:33 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:04.901 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:04.901 fio-3.35 00:14:04.901 Starting 1 thread 00:14:07.436 00:14:07.436 test: (groupid=0, jobs=1): err= 0: pid=69738: Wed Nov 6 15:12:36 2024 00:14:07.436 read: IOPS=6502, BW=25.4MiB/s (26.6MB/s)(51.0MiB/2008msec) 00:14:07.436 slat (usec): min=2, max=350, avg= 2.71, stdev= 4.02 00:14:07.436 clat (usec): min=2955, max=18495, avg=10259.80, stdev=865.75 00:14:07.436 lat (usec): min=2965, max=18498, avg=10262.51, stdev=865.44 00:14:07.436 clat percentiles (usec): 00:14:07.436 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:14:07.436 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:14:07.436 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11600], 00:14:07.436 | 99.00th=[12125], 99.50th=[12518], 99.90th=[17433], 99.95th=[17695], 00:14:07.436 | 99.99th=[18482] 00:14:07.436 bw ( KiB/s): min=24942, max=26680, per=99.80%, avg=25961.50, stdev=819.92, samples=4 00:14:07.436 iops : min= 6235, max= 6670, avg=6490.25, stdev=205.19, samples=4 00:14:07.436 write: IOPS=6510, BW=25.4MiB/s (26.7MB/s)(51.1MiB/2008msec); 0 zone resets 00:14:07.436 slat (usec): min=2, max=226, avg= 2.84, stdev= 2.74 00:14:07.436 clat (usec): min=2421, max=16279, avg=9311.68, stdev=792.92 00:14:07.436 lat (usec): min=2435, max=16281, avg=9314.51, stdev=792.78 00:14:07.436 clat percentiles (usec): 00:14:07.436 | 1.00th=[ 7635], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8717], 00:14:07.436 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9503], 00:14:07.436 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10552], 00:14:07.436 | 99.00th=[11076], 99.50th=[11338], 99.90th=[14877], 99.95th=[15139], 00:14:07.436 | 99.99th=[16188] 00:14:07.436 bw ( KiB/s): min=25728, max=26131, per=99.92%, avg=26020.75, stdev=195.48, samples=4 00:14:07.436 iops : min= 6432, max= 6532, avg=6505.00, stdev=48.73, samples=4 00:14:07.436 lat (msec) : 4=0.06%, 10=60.06%, 20=39.88% 00:14:07.436 cpu : usr=71.70%, sys=21.77%, ctx=17, majf=0, minf=14 00:14:07.436 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:07.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:07.436 issued rwts: total=13058,13073,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:07.436 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:07.436 00:14:07.436 Run status group 0 (all jobs): 00:14:07.436 READ: bw=25.4MiB/s (26.6MB/s), 25.4MiB/s-25.4MiB/s (26.6MB/s-26.6MB/s), io=51.0MiB (53.5MB), run=2008-2008msec 00:14:07.436 WRITE: bw=25.4MiB/s (26.7MB/s), 25.4MiB/s-25.4MiB/s (26.7MB/s-26.7MB/s), io=51.1MiB (53.5MB), run=2008-2008msec 00:14:07.436 15:12:36 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:07.436 15:12:36 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:14:07.696 15:12:36 -- host/fio.sh@64 -- # ls_nested_guid=367d2045-fcb9-4fef-9a10-1ac3ac7a4001 00:14:07.696 15:12:36 -- host/fio.sh@65 -- # get_lvs_free_mb 367d2045-fcb9-4fef-9a10-1ac3ac7a4001 00:14:07.696 15:12:36 -- common/autotest_common.sh@1353 -- # local lvs_uuid=367d2045-fcb9-4fef-9a10-1ac3ac7a4001 00:14:07.696 15:12:36 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:07.696 15:12:36 -- common/autotest_common.sh@1355 -- # local fc 00:14:07.696 15:12:36 -- common/autotest_common.sh@1356 -- # local cs 00:14:07.696 15:12:36 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:07.955 15:12:37 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:07.955 { 00:14:07.955 "uuid": "95e41591-038b-4e77-8959-69435fec37a6", 00:14:07.955 "name": "lvs_0", 00:14:07.955 "base_bdev": "Nvme0n1", 00:14:07.955 "total_data_clusters": 4, 00:14:07.955 "free_clusters": 0, 00:14:07.955 "block_size": 4096, 00:14:07.955 "cluster_size": 1073741824 00:14:07.955 }, 00:14:07.955 { 00:14:07.955 "uuid": "367d2045-fcb9-4fef-9a10-1ac3ac7a4001", 00:14:07.955 "name": "lvs_n_0", 00:14:07.955 "base_bdev": "80aa2dc9-49e1-4a78-b34f-9bb1eb50a329", 00:14:07.955 "total_data_clusters": 1022, 00:14:07.955 "free_clusters": 1022, 00:14:07.955 "block_size": 4096, 00:14:07.955 "cluster_size": 4194304 00:14:07.955 } 00:14:07.955 ]' 00:14:07.955 15:12:37 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="367d2045-fcb9-4fef-9a10-1ac3ac7a4001") .free_clusters' 00:14:07.955 15:12:37 -- common/autotest_common.sh@1358 -- # fc=1022 00:14:07.955 15:12:37 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="367d2045-fcb9-4fef-9a10-1ac3ac7a4001") .cluster_size' 00:14:07.955 4088 00:14:07.955 15:12:37 -- common/autotest_common.sh@1359 -- # cs=4194304 00:14:07.955 15:12:37 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:14:07.955 15:12:37 -- common/autotest_common.sh@1363 -- # echo 4088 00:14:07.955 15:12:37 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:14:08.214 2fffdd8c-6173-456c-abf5-b2124a53c167 00:14:08.214 15:12:37 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:14:08.473 15:12:37 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:14:08.732 15:12:37 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:08.991 15:12:38 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:08.991 15:12:38 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:08.991 15:12:38 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:08.991 15:12:38 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:08.991 15:12:38 -- common/autotest_common.sh@1328 -- # 
local sanitizers 00:14:08.991 15:12:38 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:08.991 15:12:38 -- common/autotest_common.sh@1330 -- # shift 00:14:08.991 15:12:38 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:08.991 15:12:38 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:08.991 15:12:38 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:08.991 15:12:38 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:08.991 15:12:38 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:08.991 15:12:38 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:08.991 15:12:38 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:08.991 15:12:38 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:08.991 15:12:38 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:08.991 15:12:38 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:14:08.991 15:12:38 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:08.991 15:12:38 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:08.991 15:12:38 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:08.991 15:12:38 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:08.991 15:12:38 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:09.249 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:09.249 fio-3.35 00:14:09.249 Starting 1 thread 00:14:11.782 00:14:11.782 test: (groupid=0, jobs=1): err= 0: pid=69816: Wed Nov 6 15:12:40 2024 00:14:11.782 read: IOPS=5892, BW=23.0MiB/s (24.1MB/s)(46.2MiB/2009msec) 00:14:11.782 slat (nsec): min=1924, max=360174, avg=2700.93, stdev=4391.82 00:14:11.782 clat (usec): min=3245, max=19383, avg=11353.65, stdev=950.78 00:14:11.782 lat (usec): min=3255, max=19385, avg=11356.35, stdev=950.44 00:14:11.782 clat percentiles (usec): 00:14:11.782 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:14:11.782 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:14:11.782 | 70.00th=[11731], 80.00th=[12125], 90.00th=[12518], 95.00th=[12780], 00:14:11.782 | 99.00th=[13435], 99.50th=[13829], 99.90th=[17957], 99.95th=[19006], 00:14:11.782 | 99.99th=[19268] 00:14:11.782 bw ( KiB/s): min=22706, max=23912, per=99.92%, avg=23552.50, stdev=568.11, samples=4 00:14:11.782 iops : min= 5676, max= 5978, avg=5888.00, stdev=142.28, samples=4 00:14:11.782 write: IOPS=5891, BW=23.0MiB/s (24.1MB/s)(46.2MiB/2009msec); 0 zone resets 00:14:11.782 slat (usec): min=2, max=308, avg= 2.82, stdev= 3.44 00:14:11.782 clat (usec): min=2481, max=19450, avg=10278.16, stdev=909.33 00:14:11.782 lat (usec): min=2495, max=19468, avg=10280.99, stdev=909.23 00:14:11.782 clat percentiles (usec): 00:14:11.782 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:14:11.782 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:14:11.782 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:14:11.782 | 99.00th=[12256], 99.50th=[12518], 99.90th=[16909], 99.95th=[17957], 00:14:11.782 | 99.99th=[19530] 00:14:11.782 bw ( KiB/s): min=23440, max=23592, per=99.81%, 
avg=23520.00, stdev=62.99, samples=4 00:14:11.782 iops : min= 5860, max= 5898, avg=5880.00, stdev=15.75, samples=4 00:14:11.782 lat (msec) : 4=0.05%, 10=21.13%, 20=78.82% 00:14:11.782 cpu : usr=75.75%, sys=18.63%, ctx=6, majf=0, minf=14 00:14:11.782 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:14:11.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:11.782 issued rwts: total=11839,11836,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:11.782 00:14:11.782 Run status group 0 (all jobs): 00:14:11.782 READ: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.2MiB (48.5MB), run=2009-2009msec 00:14:11.782 WRITE: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.2MiB (48.5MB), run=2009-2009msec 00:14:11.782 15:12:40 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:11.782 15:12:40 -- host/fio.sh@74 -- # sync 00:14:11.782 15:12:40 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:14:12.041 15:12:41 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:14:12.300 15:12:41 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:14:12.562 15:12:41 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:14:12.841 15:12:41 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:14:13.820 15:12:42 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:13.820 15:12:42 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:13.820 15:12:42 -- host/fio.sh@86 -- # nvmftestfini 00:14:13.820 15:12:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:13.820 15:12:42 -- nvmf/common.sh@116 -- # sync 00:14:13.820 15:12:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:13.820 15:12:42 -- nvmf/common.sh@119 -- # set +e 00:14:13.820 15:12:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:13.820 15:12:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:13.820 rmmod nvme_tcp 00:14:13.820 rmmod nvme_fabrics 00:14:13.820 rmmod nvme_keyring 00:14:13.820 15:12:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:13.820 15:12:42 -- nvmf/common.sh@123 -- # set -e 00:14:13.820 15:12:42 -- nvmf/common.sh@124 -- # return 0 00:14:13.820 15:12:42 -- nvmf/common.sh@477 -- # '[' -n 69499 ']' 00:14:13.820 15:12:42 -- nvmf/common.sh@478 -- # killprocess 69499 00:14:13.820 15:12:42 -- common/autotest_common.sh@936 -- # '[' -z 69499 ']' 00:14:13.820 15:12:42 -- common/autotest_common.sh@940 -- # kill -0 69499 00:14:13.820 15:12:42 -- common/autotest_common.sh@941 -- # uname 00:14:13.820 15:12:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:13.820 15:12:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69499 00:14:13.820 killing process with pid 69499 00:14:13.820 15:12:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:13.820 15:12:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:13.820 15:12:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69499' 00:14:13.820 15:12:42 -- common/autotest_common.sh@955 -- # kill 69499 00:14:13.820 
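The cleanup above walks back down the stack in reverse order of creation: the subsystem exporting the nested volume goes first, then the nested lvol and its lvstore, then the base lvol and lvs_0, and only then is the PCIe controller detached and the target shut down, so nothing is removed while something else still sits on top of it. Condensed to the rpc.py calls this run makes (rpc.py abbreviates scripts/rpc.py):

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
  rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0   # nested lvol first
  rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
  rpc.py bdev_lvol_delete lvs_0/lbd_0                 # then the lvol it was nested on
  rpc.py bdev_lvol_delete_lvstore -l lvs_0
  rpc.py bdev_nvme_detach_controller Nvme0            # finally release the NVMe device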
15:12:42 -- common/autotest_common.sh@960 -- # wait 69499 00:14:14.079 15:12:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:14.079 15:12:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:14.079 15:12:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:14.079 15:12:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:14.079 15:12:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:14.079 15:12:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.079 15:12:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.079 15:12:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.079 15:12:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:14.079 00:14:14.080 real 0m19.793s 00:14:14.080 user 1m27.248s 00:14:14.080 sys 0m4.294s 00:14:14.080 15:12:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:14.080 15:12:43 -- common/autotest_common.sh@10 -- # set +x 00:14:14.080 ************************************ 00:14:14.080 END TEST nvmf_fio_host 00:14:14.080 ************************************ 00:14:14.080 15:12:43 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:14.080 15:12:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:14.080 15:12:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:14.080 15:12:43 -- common/autotest_common.sh@10 -- # set +x 00:14:14.080 ************************************ 00:14:14.080 START TEST nvmf_failover 00:14:14.080 ************************************ 00:14:14.080 15:12:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:14.080 * Looking for test storage... 00:14:14.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:14.080 15:12:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:14.080 15:12:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:14.080 15:12:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:14.080 15:12:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:14.080 15:12:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:14.080 15:12:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:14.080 15:12:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:14.080 15:12:43 -- scripts/common.sh@335 -- # IFS=.-: 00:14:14.080 15:12:43 -- scripts/common.sh@335 -- # read -ra ver1 00:14:14.080 15:12:43 -- scripts/common.sh@336 -- # IFS=.-: 00:14:14.080 15:12:43 -- scripts/common.sh@336 -- # read -ra ver2 00:14:14.080 15:12:43 -- scripts/common.sh@337 -- # local 'op=<' 00:14:14.080 15:12:43 -- scripts/common.sh@339 -- # ver1_l=2 00:14:14.080 15:12:43 -- scripts/common.sh@340 -- # ver2_l=1 00:14:14.080 15:12:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:14.080 15:12:43 -- scripts/common.sh@343 -- # case "$op" in 00:14:14.080 15:12:43 -- scripts/common.sh@344 -- # : 1 00:14:14.080 15:12:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:14.080 15:12:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:14.080 15:12:43 -- scripts/common.sh@364 -- # decimal 1 00:14:14.080 15:12:43 -- scripts/common.sh@352 -- # local d=1 00:14:14.080 15:12:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:14.080 15:12:43 -- scripts/common.sh@354 -- # echo 1 00:14:14.080 15:12:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:14.080 15:12:43 -- scripts/common.sh@365 -- # decimal 2 00:14:14.080 15:12:43 -- scripts/common.sh@352 -- # local d=2 00:14:14.080 15:12:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:14.080 15:12:43 -- scripts/common.sh@354 -- # echo 2 00:14:14.080 15:12:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:14.080 15:12:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:14.080 15:12:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:14.080 15:12:43 -- scripts/common.sh@367 -- # return 0 00:14:14.080 15:12:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:14.080 15:12:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:14.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.080 --rc genhtml_branch_coverage=1 00:14:14.080 --rc genhtml_function_coverage=1 00:14:14.080 --rc genhtml_legend=1 00:14:14.080 --rc geninfo_all_blocks=1 00:14:14.080 --rc geninfo_unexecuted_blocks=1 00:14:14.080 00:14:14.080 ' 00:14:14.080 15:12:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:14.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.080 --rc genhtml_branch_coverage=1 00:14:14.080 --rc genhtml_function_coverage=1 00:14:14.080 --rc genhtml_legend=1 00:14:14.080 --rc geninfo_all_blocks=1 00:14:14.080 --rc geninfo_unexecuted_blocks=1 00:14:14.080 00:14:14.080 ' 00:14:14.080 15:12:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:14.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.080 --rc genhtml_branch_coverage=1 00:14:14.080 --rc genhtml_function_coverage=1 00:14:14.080 --rc genhtml_legend=1 00:14:14.080 --rc geninfo_all_blocks=1 00:14:14.080 --rc geninfo_unexecuted_blocks=1 00:14:14.080 00:14:14.080 ' 00:14:14.080 15:12:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:14.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.080 --rc genhtml_branch_coverage=1 00:14:14.080 --rc genhtml_function_coverage=1 00:14:14.080 --rc genhtml_legend=1 00:14:14.080 --rc geninfo_all_blocks=1 00:14:14.080 --rc geninfo_unexecuted_blocks=1 00:14:14.080 00:14:14.080 ' 00:14:14.080 15:12:43 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:14.080 15:12:43 -- nvmf/common.sh@7 -- # uname -s 00:14:14.339 15:12:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.339 15:12:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.339 15:12:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.339 15:12:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.339 15:12:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.339 15:12:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.339 15:12:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.339 15:12:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.339 15:12:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.339 15:12:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.339 15:12:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:14:14.339 
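The lt/cmp_versions exchange above is how the harness decides that the installed lcov (1.x) predates version 2 and therefore needs the old-style --rc lcov_* option syntax: both version strings are split on dots and dashes and compared field by field as integers. A standalone sketch of that comparison, assuming plain bash (version_lt is a name used only here; the script's own helpers are lt and cmp_versions in scripts/common.sh):

  version_lt() {                        # returns 0 (true) if $1 < $2
    local IFS=.- i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                            # equal
  }
  version_lt 1.15 2 && echo 'needs legacy lcov options'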
15:12:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:14:14.339 15:12:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.339 15:12:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.339 15:12:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:14.339 15:12:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:14.339 15:12:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.339 15:12:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.339 15:12:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.339 15:12:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.339 15:12:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.339 15:12:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.339 15:12:43 -- paths/export.sh@5 -- # export PATH 00:14:14.339 15:12:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.339 15:12:43 -- nvmf/common.sh@46 -- # : 0 00:14:14.339 15:12:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:14.339 15:12:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:14.339 15:12:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:14.339 15:12:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.339 15:12:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.339 15:12:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:14:14.339 15:12:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:14.339 15:12:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:14.339 15:12:43 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:14.339 15:12:43 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:14.340 15:12:43 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:14.340 15:12:43 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:14.340 15:12:43 -- host/failover.sh@18 -- # nvmftestinit 00:14:14.340 15:12:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:14.340 15:12:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.340 15:12:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:14.340 15:12:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:14.340 15:12:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:14.340 15:12:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.340 15:12:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.340 15:12:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.340 15:12:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:14.340 15:12:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:14.340 15:12:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:14.340 15:12:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:14.340 15:12:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:14.340 15:12:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:14.340 15:12:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.340 15:12:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.340 15:12:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:14.340 15:12:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:14.340 15:12:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:14.340 15:12:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:14.340 15:12:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:14.340 15:12:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.340 15:12:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:14.340 15:12:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:14.340 15:12:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:14.340 15:12:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:14.340 15:12:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:14.340 15:12:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:14.340 Cannot find device "nvmf_tgt_br" 00:14:14.340 15:12:43 -- nvmf/common.sh@154 -- # true 00:14:14.340 15:12:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:14.340 Cannot find device "nvmf_tgt_br2" 00:14:14.340 15:12:43 -- nvmf/common.sh@155 -- # true 00:14:14.340 15:12:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:14.340 15:12:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:14.340 Cannot find device "nvmf_tgt_br" 00:14:14.340 15:12:43 -- nvmf/common.sh@157 -- # true 00:14:14.340 15:12:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:14.340 Cannot find device "nvmf_tgt_br2" 00:14:14.340 15:12:43 -- nvmf/common.sh@158 -- # true 00:14:14.340 15:12:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:14.340 15:12:43 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:14:14.340 15:12:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:14.340 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:14.340 15:12:43 -- nvmf/common.sh@161 -- # true 00:14:14.340 15:12:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:14.340 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:14.340 15:12:43 -- nvmf/common.sh@162 -- # true 00:14:14.340 15:12:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:14.340 15:12:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:14.340 15:12:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:14.340 15:12:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:14.340 15:12:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:14.340 15:12:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:14.340 15:12:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:14.340 15:12:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:14.340 15:12:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:14.340 15:12:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:14.340 15:12:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:14.340 15:12:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:14.340 15:12:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:14.340 15:12:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:14.599 15:12:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:14.599 15:12:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:14.599 15:12:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:14.599 15:12:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:14.599 15:12:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:14.599 15:12:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:14.600 15:12:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:14.600 15:12:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:14.600 15:12:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:14.600 15:12:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:14.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:14.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:14:14.600 00:14:14.600 --- 10.0.0.2 ping statistics --- 00:14:14.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.600 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:14.600 15:12:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:14.600 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:14.600 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:14:14.600 00:14:14.600 --- 10.0.0.3 ping statistics --- 00:14:14.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.600 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:14.600 15:12:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:14.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:14.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:14.600 00:14:14.600 --- 10.0.0.1 ping statistics --- 00:14:14.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.600 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:14.600 15:12:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.600 15:12:43 -- nvmf/common.sh@421 -- # return 0 00:14:14.600 15:12:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:14.600 15:12:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.600 15:12:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:14.600 15:12:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:14.600 15:12:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.600 15:12:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:14.600 15:12:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:14.600 15:12:43 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:14.600 15:12:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:14.600 15:12:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:14.600 15:12:43 -- common/autotest_common.sh@10 -- # set +x 00:14:14.600 15:12:43 -- nvmf/common.sh@469 -- # nvmfpid=70059 00:14:14.600 15:12:43 -- nvmf/common.sh@470 -- # waitforlisten 70059 00:14:14.600 15:12:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:14.600 15:12:43 -- common/autotest_common.sh@829 -- # '[' -z 70059 ']' 00:14:14.600 15:12:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.600 15:12:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:14.600 15:12:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.600 15:12:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:14.600 15:12:43 -- common/autotest_common.sh@10 -- # set +x 00:14:14.600 [2024-11-06 15:12:43.788252] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:14.600 [2024-11-06 15:12:43.788357] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.859 [2024-11-06 15:12:43.923068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:14.859 [2024-11-06 15:12:43.974146] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:14.859 [2024-11-06 15:12:43.974303] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.859 [2024-11-06 15:12:43.974314] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
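nvmfappstart launches this target with core mask 0xE rather than the 0xF used for the fio host test: each set bit in the mask selects one CPU for an SPDK reactor, so 0xE (binary 1110) gives cores 1-3 and leaves core 0 unused by the target. A tiny illustration of how such a mask decodes (purely an aid for reading the log, not part of the test):

  mask=0xE
  for ((core = 0; core < 8; core++)); do
    (( (mask >> core) & 1 )) && echo "reactor on core $core"
  done
  # prints cores 1, 2 and 3 -- matching the three reactor_run notices below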
00:14:14.859 [2024-11-06 15:12:43.974321] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.859 [2024-11-06 15:12:43.974863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.859 [2024-11-06 15:12:43.974958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:14.859 [2024-11-06 15:12:43.974962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.796 15:12:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:15.796 15:12:44 -- common/autotest_common.sh@862 -- # return 0 00:14:15.796 15:12:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:15.796 15:12:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:15.796 15:12:44 -- common/autotest_common.sh@10 -- # set +x 00:14:15.796 15:12:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.796 15:12:44 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:15.796 [2024-11-06 15:12:45.048509] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.796 15:12:45 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:16.055 Malloc0 00:14:16.055 15:12:45 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:16.622 15:12:45 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:16.622 15:12:45 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.881 [2024-11-06 15:12:46.043023] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.881 15:12:46 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:17.140 [2024-11-06 15:12:46.263183] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:17.140 15:12:46 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:17.399 [2024-11-06 15:12:46.495420] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:17.399 15:12:46 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:14:17.399 15:12:46 -- host/failover.sh@31 -- # bdevperf_pid=70122 00:14:17.399 15:12:46 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:17.399 15:12:46 -- host/failover.sh@34 -- # waitforlisten 70122 /var/tmp/bdevperf.sock 00:14:17.399 15:12:46 -- common/autotest_common.sh@829 -- # '[' -z 70122 ']' 00:14:17.399 15:12:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:17.399 15:12:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:17.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:17.399 15:12:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:17.399 15:12:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:17.399 15:12:46 -- common/autotest_common.sh@10 -- # set +x 00:14:18.336 15:12:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:18.336 15:12:47 -- common/autotest_common.sh@862 -- # return 0 00:14:18.336 15:12:47 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:18.904 NVMe0n1 00:14:18.904 15:12:47 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:18.904 00:14:19.162 15:12:48 -- host/failover.sh@39 -- # run_test_pid=70140 00:14:19.162 15:12:48 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:19.162 15:12:48 -- host/failover.sh@41 -- # sleep 1 00:14:20.097 15:12:49 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.356 [2024-11-06 15:12:49.444476] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1479d00 is same with the state(5) to be set 00:14:20.356 [2024-11-06 15:12:49.444544] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1479d00 is same with the state(5) to be set 00:14:20.356 [2024-11-06 15:12:49.444555] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1479d00 is same with the state(5) to be set 00:14:20.356 [2024-11-06 15:12:49.444564] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1479d00 is same with the state(5) to be set 00:14:20.356 [2024-11-06 15:12:49.444571] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1479d00 is same with the state(5) to be set 00:14:20.356 [2024-11-06 15:12:49.444579] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1479d00 is same with the state(5) to be set 00:14:20.356 [2024-11-06 15:12:49.444587] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1479d00 is same with the state(5) to be set 00:14:20.356 [2024-11-06 15:12:49.444595] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1479d00 is same with the state(5) to be set 00:14:20.356 [2024-11-06 15:12:49.444602] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1479d00 is same with the state(5) to be set 00:14:20.356 [2024-11-06 15:12:49.444610] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1479d00 is same with the state(5) to be set 00:14:20.356 [2024-11-06 15:12:49.444617] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1479d00 is same with the state(5) to be set 00:14:20.356 [2024-11-06 15:12:49.444625] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1479d00 is same with the state(5) to be set 00:14:20.356 15:12:49 -- host/failover.sh@45 -- # sleep 3 00:14:23.643 15:12:52 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller 
-b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:23.643 00:14:23.643 15:12:52 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:23.902 [2024-11-06 15:12:53.048375] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478d70 is same with the state(5) to be set 00:14:23.902 [2024-11-06 15:12:53.048412] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478d70 is same with the state(5) to be set 00:14:23.902 [2024-11-06 15:12:53.048421] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478d70 is same with the state(5) to be set 00:14:23.902 [2024-11-06 15:12:53.048429] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478d70 is same with the state(5) to be set 00:14:23.902 [2024-11-06 15:12:53.048437] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478d70 is same with the state(5) to be set 00:14:23.902 [2024-11-06 15:12:53.048445] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478d70 is same with the state(5) to be set 00:14:23.902 [2024-11-06 15:12:53.048453] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478d70 is same with the state(5) to be set 00:14:23.902 [2024-11-06 15:12:53.048475] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478d70 is same with the state(5) to be set 00:14:23.902 [2024-11-06 15:12:53.048500] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478d70 is same with the state(5) to be set 00:14:23.902 15:12:53 -- host/failover.sh@50 -- # sleep 3 00:14:27.189 15:12:56 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.189 [2024-11-06 15:12:56.323815] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.189 15:12:56 -- host/failover.sh@55 -- # sleep 1 00:14:28.127 15:12:57 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:28.695 [2024-11-06 15:12:57.666545] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147a060 is same with the state(5) to be set 00:14:28.695 [2024-11-06 15:12:57.666611] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147a060 is same with the state(5) to be set 00:14:28.695 [2024-11-06 15:12:57.666622] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147a060 is same with the state(5) to be set 00:14:28.695 [2024-11-06 15:12:57.666630] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147a060 is same with the state(5) to be set 00:14:28.695 [2024-11-06 15:12:57.666638] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147a060 is same with the state(5) to be set 00:14:28.695 [2024-11-06 15:12:57.666645] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147a060 is same with the state(5) to be set 00:14:28.695 [2024-11-06 15:12:57.666653] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147a060 is same with the state(5) to be set 00:14:28.695 [2024-11-06 15:12:57.666660] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147a060 is same with the state(5) to be set 00:14:28.695 [2024-11-06 15:12:57.666667] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147a060 is same with the state(5) to be set 00:14:28.695 [2024-11-06 15:12:57.666705] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147a060 is same with the state(5) to be set 00:14:28.695 [2024-11-06 15:12:57.666713] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147a060 is same with the state(5) to be set 00:14:28.695 [2024-11-06 15:12:57.666721] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147a060 is same with the state(5) to be set 00:14:28.695 15:12:57 -- host/failover.sh@59 -- # wait 70140 00:14:35.266 0 00:14:35.266 15:13:03 -- host/failover.sh@61 -- # killprocess 70122 00:14:35.266 15:13:03 -- common/autotest_common.sh@936 -- # '[' -z 70122 ']' 00:14:35.266 15:13:03 -- common/autotest_common.sh@940 -- # kill -0 70122 00:14:35.266 15:13:03 -- common/autotest_common.sh@941 -- # uname 00:14:35.266 15:13:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:35.266 15:13:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70122 00:14:35.266 15:13:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:35.266 15:13:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:35.266 killing process with pid 70122 00:14:35.266 15:13:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70122' 00:14:35.266 15:13:03 -- common/autotest_common.sh@955 -- # kill 70122 00:14:35.266 15:13:03 -- common/autotest_common.sh@960 -- # wait 70122 00:14:35.266 15:13:03 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:35.266 [2024-11-06 15:12:46.552520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:35.266 [2024-11-06 15:12:46.552613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70122 ] 00:14:35.266 [2024-11-06 15:12:46.685958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.266 [2024-11-06 15:12:46.738443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.266 Running I/O for 15 seconds... 
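The abort and failover messages that follow are the direct result of the listener changes traced in host/failover.sh above: bdevperf registers 10.0.0.2:4420 and 10.0.0.2:4421 as paths for the same NVMe0 bdev, and removing the 4420 listener while I/O is in flight forces the first failover. A condensed, hedged sketch of that driving sequence, using only RPC calls visible in the trace (rpc.py stands in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path; all option values are copied from the lines above):

    # hedged sketch condensed from the host/failover.sh trace, not the full script
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do   # three listeners so two failovers can be exercised
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done
    # bdevperf (started with -z -r /var/tmp/bdevperf.sock) is given two paths for NVMe0:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # dropping the active listener while perform_tests runs triggers the failover logged below
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Each NOTICE pair in the dump that follows is one queued command (identified by sqid/cid/lba) being completed manually with ABORTED - SQ DELETION status once the qpair on the removed listener is torn down.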
00:14:35.266 [2024-11-06 15:12:49.444710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.266 [2024-11-06 15:12:49.444762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.444795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.444810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.444826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.444839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.444868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.444898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.444914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.444928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.444944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.444957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.444973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.444987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 
15:12:49.445090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.267 [2024-11-06 15:12:49.445215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.267 [2024-11-06 15:12:49.445247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.267 [2024-11-06 15:12:49.445277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.267 [2024-11-06 15:12:49.445364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.267 [2024-11-06 15:12:49.445663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.267 [2024-11-06 15:12:49.445693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445723] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:120 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.267 [2024-11-06 15:12:49.445770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.267 [2024-11-06 15:12:49.445831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.267 [2024-11-06 15:12:49.445877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.267 [2024-11-06 15:12:49.445890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.445915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.268 [2024-11-06 15:12:49.445929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.445945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.445959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.445989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.268 [2024-11-06 15:12:49.446017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.446044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 
lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.268 [2024-11-06 15:12:49.446072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.446100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.446141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.446168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.268 [2024-11-06 15:12:49.446194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.268 [2024-11-06 15:12:49.446221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.446249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.446275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.446308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.446336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.446362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.446388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.446426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.446451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.268 [2024-11-06 15:12:49.446492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.268 [2024-11-06 15:12:49.446518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.268 [2024-11-06 15:12:49.446545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.446571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.446597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.446624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.268 
[2024-11-06 15:12:49.446655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.446705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.268 [2024-11-06 15:12:49.446734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.268 [2024-11-06 15:12:49.446775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.446804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.446847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:129480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.446876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.268 [2024-11-06 15:12:49.446904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.268 [2024-11-06 15:12:49.446931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.268 [2024-11-06 15:12:49.446946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.268 [2024-11-06 15:12:49.446959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.446973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.446986] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.269 [2024-11-06 15:12:49.447013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.269 [2024-11-06 15:12:49.447042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.447069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.447118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.447176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.447207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:128856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.447237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:128896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.447267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:128904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.447296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.447326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:129536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.447355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:129544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.447384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.269 [2024-11-06 15:12:49.447413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.269 [2024-11-06 15:12:49.447442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:129568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.269 [2024-11-06 15:12:49.447500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.269 [2024-11-06 15:12:49.447550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.269 [2024-11-06 15:12:49.447577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.269 [2024-11-06 15:12:49.447602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.447628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.269 [2024-11-06 15:12:49.447656] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.269 [2024-11-06 15:12:49.447694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:129624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.269 [2024-11-06 15:12:49.447734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.447765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.447792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.447820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.447847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:129008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.447873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.447900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:129032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.447935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.447961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.447976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:129632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.269 [2024-11-06 15:12:49.447988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.448002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:129640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.448015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.448029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.448042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.448056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.269 [2024-11-06 15:12:49.448068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.448083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.269 [2024-11-06 15:12:49.448095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.269 [2024-11-06 15:12:49.448109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.270 [2024-11-06 15:12:49.448139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-11-06 15:12:49.448166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.270 [2024-11-06 15:12:49.448195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.270 [2024-11-06 15:12:49.448221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-11-06 15:12:49.448246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:129712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-11-06 15:12:49.448278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.270 [2024-11-06 15:12:49.448305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.270 [2024-11-06 15:12:49.448331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-11-06 15:12:49.448357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.270 [2024-11-06 15:12:49.448382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-11-06 15:12:49.448408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-11-06 15:12:49.448434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.270 [2024-11-06 15:12:49.448460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.270 [2024-11-06 15:12:49.448486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-11-06 15:12:49.448511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 
[2024-11-06 15:12:49.448525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-11-06 15:12:49.448537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-11-06 15:12:49.448565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:129120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-11-06 15:12:49.448591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-11-06 15:12:49.448625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:129136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-11-06 15:12:49.448651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-11-06 15:12:49.448719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914970 is same with the state(5) to be set 00:14:35.270 [2024-11-06 15:12:49.448750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:35.270 [2024-11-06 15:12:49.448760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:35.270 [2024-11-06 15:12:49.448770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129176 len:8 PRP1 0x0 PRP2 0x0 00:14:35.270 [2024-11-06 15:12:49.448783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448827] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x914970 was disconnected and freed. reset controller. 
00:14:35.270 [2024-11-06 15:12:49.448843] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:14:35.270 [2024-11-06 15:12:49.448893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.270 [2024-11-06 15:12:49.448914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.270 [2024-11-06 15:12:49.448942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.270 [2024-11-06 15:12:49.448970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.448983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.270 [2024-11-06 15:12:49.448996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:49.449009] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:35.270 [2024-11-06 15:12:49.451430] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:35.270 [2024-11-06 15:12:49.451480] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b1690 (9): Bad file descriptor 00:14:35.270 [2024-11-06 15:12:49.482079] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
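The notices above trace one host-side failover cycle in bdev_nvme: queued I/O on the qpair is aborted with SQ DELETION status, the qpair is disconnected and freed, and the controller is reconnected on the next path (failover from 10.0.0.2:4420 to 10.0.0.2:4421, then "Resetting controller successful"). As a rough sketch only, the block below shows how a target with several TCP listeners and a host controller with alternate paths could be set up through SPDK's scripts/rpc.py; the NQN and ports are taken from this log, but the bdev name, controller name and exact options are assumptions, not the commands this job actually ran.

  # target side (sketch, assumed options): one subsystem, one namespace, three TCP listeners
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # host side (assumed invocation): attach the primary path, then register the other
  # listeners as additional paths under the same controller name so bdev_nvme can fail over
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

With a setup along these lines, taking the active listener down would be expected to produce the same sequence seen here: aborted submission-queue entries, a freed qpair, and a successful controller reset against the next path.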
00:14:35.270 [2024-11-06 15:12:53.047957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.270 [2024-11-06 15:12:53.048024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:53.048086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.270 [2024-11-06 15:12:53.048102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:53.048116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.270 [2024-11-06 15:12:53.048128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:53.048141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.270 [2024-11-06 15:12:53.048154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:53.048166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1690 is same with the state(5) to be set 00:14:35.270 [2024-11-06 15:12:53.048558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-11-06 15:12:53.048584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:53.048609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.270 [2024-11-06 15:12:53.048634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.270 [2024-11-06 15:12:53.048686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.048738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.048779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.048808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.048839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.048892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.048923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.048949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.048978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.271 [2024-11-06 15:12:53.049237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.271 [2024-11-06 15:12:53.049835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.049918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.271 [2024-11-06 15:12:53.049947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.271 [2024-11-06 15:12:53.049974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.049989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.271 [2024-11-06 15:12:53.050001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.050016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.271 [2024-11-06 15:12:53.050030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.050045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.050058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.271 [2024-11-06 15:12:53.050072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.271 [2024-11-06 15:12:53.050107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 
[2024-11-06 15:12:53.050149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.272 [2024-11-06 15:12:53.050188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050415] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.272 [2024-11-06 15:12:53.050525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.272 [2024-11-06 15:12:53.050608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.272 [2024-11-06 15:12:53.050635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:88 nsid:1 lba:15416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.272 [2024-11-06 15:12:53.050858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.050974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.050989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.051017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.051030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.272 [2024-11-06 15:12:53.051043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.051056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15496 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.272 [2024-11-06 15:12:53.051068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.051082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.051094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.051107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.051119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.051160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.051192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.051208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.051221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.051237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.051251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.051267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.051289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.051305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.272 [2024-11-06 15:12:53.051319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.272 [2024-11-06 15:12:53.051335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.051349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.051364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.273 [2024-11-06 15:12:53.051378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.051393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:14:35.273 [2024-11-06 15:12:53.051407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.051423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.051436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.051454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.051480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.051513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.273 [2024-11-06 15:12:53.051527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.051557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.051569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.051584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.273 [2024-11-06 15:12:53.051596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.051611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.051624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.051638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.051651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.051666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.273 [2024-11-06 15:12:53.051679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.051701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.051725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.051743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.273 [2024-11-06 15:12:53.051756] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.051785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.273 [2024-11-06 15:12:53.051798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.051826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.051838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.051852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.051864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.051877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.273 [2024-11-06 15:12:53.051889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.051903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.273 [2024-11-06 15:12:53.051915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.051933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.051946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.051975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.051988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.052033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.052047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.052062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.052092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.052107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.052120] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.052136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.052157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.052173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.052187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.052203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.052216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.052232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.052246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.052261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.052275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.052291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.273 [2024-11-06 15:12:53.052304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.052320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.273 [2024-11-06 15:12:53.052333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.052349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.052363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.052378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.052392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.052408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.052427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.052443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.273 [2024-11-06 15:12:53.052464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.052491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.052506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.052522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.052536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.052559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.273 [2024-11-06 15:12:53.052573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.052589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.273 [2024-11-06 15:12:53.052603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.273 [2024-11-06 15:12:53.052619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.274 [2024-11-06 15:12:53.052632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:53.052648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.274 [2024-11-06 15:12:53.052662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:53.052677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:53.052691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:53.052706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:53.052720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:53.052735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.274 [2024-11-06 15:12:53.052761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:35.274 [2024-11-06 15:12:53.052778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:53.052791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:53.052807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.274 [2024-11-06 15:12:53.052820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:53.052837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.274 [2024-11-06 15:12:53.052865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:53.052880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:53.052893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:53.052908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:53.052921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:53.052936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:53.052951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:53.052974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:53.052988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:53.053006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:53.053020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:53.053035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:53.053048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:53.053063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:53.053077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:53.053091] 
nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9450 is same with the state(5) to be set 00:14:35.274 [2024-11-06 15:12:53.053108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:35.274 [2024-11-06 15:12:53.053118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:35.274 [2024-11-06 15:12:53.053128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15168 len:8 PRP1 0x0 PRP2 0x0 00:14:35.274 [2024-11-06 15:12:53.053142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:53.053187] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8f9450 was disconnected and freed. reset controller. 00:14:35.274 [2024-11-06 15:12:53.053204] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:14:35.274 [2024-11-06 15:12:53.053219] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:35.274 [2024-11-06 15:12:53.056058] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:35.274 [2024-11-06 15:12:53.056100] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b1690 (9): Bad file descriptor 00:14:35.274 [2024-11-06 15:12:53.089996] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:35.274 [2024-11-06 15:12:57.666794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:57.666849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:57.666876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:57.666892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:57.666908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:57.666921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:57.666936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:57.666949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:57.666984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:57.666999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:57.667014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 
15:12:57.667027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:57.667041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:57.667054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:57.667069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:57.667113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:57.667167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:57.667183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:57.667199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:57.667213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:57.667228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:57.667242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:57.667257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:57.667271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:57.667286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:57.667300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:57.667316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.274 [2024-11-06 15:12:57.667330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:57.667345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.274 [2024-11-06 15:12:57.667359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:57.667374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:57.667387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:57.667403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:57.667428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:57.667444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.274 [2024-11-06 15:12:57.667458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:57.667474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.274 [2024-11-06 15:12:57.667488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.274 [2024-11-06 15:12:57.667503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.274 [2024-11-06 15:12:57.667517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.667532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.667560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.667574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.667587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.667602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.275 [2024-11-06 15:12:57.667614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.667628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.275 [2024-11-06 15:12:57.667641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.667655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.667668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.667694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.667711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.667740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.667752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.667766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.275 [2024-11-06 15:12:57.667779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.667793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.275 [2024-11-06 15:12:57.667805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.667826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.275 [2024-11-06 15:12:57.667840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.667856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.667868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.667882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.667894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.667909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.667935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.667952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.667965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.667982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.667996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.668023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.668049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.668075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.275 [2024-11-06 15:12:57.668103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.668129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.668155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.668189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.668217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.275 [2024-11-06 15:12:57.668243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.668270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.275 [2024-11-06 15:12:57.668296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 
15:12:57.668310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.275 [2024-11-06 15:12:57.668323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.275 [2024-11-06 15:12:57.668349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.275 [2024-11-06 15:12:57.668376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.668404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.668430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.668457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.668483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.275 [2024-11-06 15:12:57.668510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.275 [2024-11-06 15:12:57.668542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.275 [2024-11-06 15:12:57.668570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.275 [2024-11-06 15:12:57.668596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.275 [2024-11-06 15:12:57.668610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.668623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.668637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.668650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.668678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.668692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.668706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.668719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.668733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.668745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.668759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.668772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.668786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.668799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.668814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.668828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.668842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.276 [2024-11-06 15:12:57.668855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.668869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:7 nsid:1 lba:10760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.276 [2024-11-06 15:12:57.668881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.668902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.668915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.668929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.668942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.668956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.668968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.668982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.668995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.669009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.669022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.669037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.276 [2024-11-06 15:12:57.669049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.669064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.669077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.669091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.669104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.669118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.669130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.669144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10176 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.669157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.669172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.669184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.669200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.669213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.669227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.669246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.669261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.669274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.669288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.276 [2024-11-06 15:12:57.669300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.669315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.276 [2024-11-06 15:12:57.669327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.669341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.669353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.669368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.276 [2024-11-06 15:12:57.669380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.669394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.276 [2024-11-06 15:12:57.669406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.669421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:14:35.276 [2024-11-06 15:12:57.669433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.669447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.276 [2024-11-06 15:12:57.669475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.669490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.669503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.669518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.276 [2024-11-06 15:12:57.669530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.276 [2024-11-06 15:12:57.669545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.276 [2024-11-06 15:12:57.669558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.669572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.669585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.669605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.277 [2024-11-06 15:12:57.669619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.669634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.669647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.669666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.669692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.669708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.277 [2024-11-06 15:12:57.669721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.669736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.277 [2024-11-06 15:12:57.669749] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.669763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.277 [2024-11-06 15:12:57.669776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.669791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.669804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.669818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.669832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.669846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.669859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.669888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.669900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.669914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.669927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.669941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.669953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.669967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.669980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.670018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.670045] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.670072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.277 [2024-11-06 15:12:57.670101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.670128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.670159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.670186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.670213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.670239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.277 [2024-11-06 15:12:57.670265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.277 [2024-11-06 15:12:57.670292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:35.277 [2024-11-06 15:12:57.670318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.670351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.670378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.670405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.670432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.670459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.670486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.670512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.670541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.670568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.277 [2024-11-06 15:12:57.670585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.277 [2024-11-06 15:12:57.670598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:35.277 [2024-11-06 15:12:57.670612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9268e0 is same with the state(5) to be set 00:14:35.277 [2024-11-06 15:12:57.670628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:35.277 [2024-11-06 15:12:57.670637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:35.278 [2024-11-06 15:12:57.670648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10456 len:8 PRP1 0x0 PRP2 0x0 00:14:35.278 [2024-11-06 15:12:57.670660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.278 [2024-11-06 15:12:57.670730] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9268e0 was disconnected and freed. reset controller. 00:14:35.278 [2024-11-06 15:12:57.670748] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:14:35.278 [2024-11-06 15:12:57.670809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.278 [2024-11-06 15:12:57.670830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.278 [2024-11-06 15:12:57.670846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.278 [2024-11-06 15:12:57.670859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.278 [2024-11-06 15:12:57.670872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.278 [2024-11-06 15:12:57.670885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.278 [2024-11-06 15:12:57.670898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.278 [2024-11-06 15:12:57.670911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.278 [2024-11-06 15:12:57.670924] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:35.278 [2024-11-06 15:12:57.673264] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:35.278 [2024-11-06 15:12:57.673301] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b1690 (9): Bad file descriptor 00:14:35.278 [2024-11-06 15:12:57.706176] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
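The flood of "ABORTED - SQ DELETION" notices above is the NVMe driver printing every queued I/O it completes as aborted while the submission queue is torn down; bdev_nvme then frees the qpair and resets the controller on the next listener, as the failover_trid / "Resetting controller successful" lines show. When reading such a dump, a one-screen summary is usually more useful than the raw lines. A minimal sketch, assuming the run's output has been captured in the try.txt file the script cats later; the grep/awk patterns are illustrative and not part of the harness:

# Count aborted READ vs WRITE commands and list the distinct abort statuses
grep -oE 'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE)' try.txt \
    | awk '{print $NF}' | sort | uniq -c
grep -oE 'ABORTED - [A-Z ]+\([0-9a-f/]+\)' try.txt | sort | uniq -c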
00:14:35.278 00:14:35.278 Latency(us) 00:14:35.278 [2024-11-06T15:13:04.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.278 [2024-11-06T15:13:04.553Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:35.278 Verification LBA range: start 0x0 length 0x4000 00:14:35.278 NVMe0n1 : 15.01 13764.05 53.77 328.19 0.00 9065.44 521.31 17873.45 00:14:35.278 [2024-11-06T15:13:04.553Z] =================================================================================================================== 00:14:35.278 [2024-11-06T15:13:04.553Z] Total : 13764.05 53.77 328.19 0.00 9065.44 521.31 17873.45 00:14:35.278 Received shutdown signal, test time was about 15.000000 seconds 00:14:35.278 00:14:35.278 Latency(us) 00:14:35.278 [2024-11-06T15:13:04.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.278 [2024-11-06T15:13:04.553Z] =================================================================================================================== 00:14:35.278 [2024-11-06T15:13:04.553Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:35.278 15:13:03 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:14:35.278 15:13:03 -- host/failover.sh@65 -- # count=3 00:14:35.278 15:13:03 -- host/failover.sh@67 -- # (( count != 3 )) 00:14:35.278 15:13:03 -- host/failover.sh@73 -- # bdevperf_pid=70323 00:14:35.278 15:13:03 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:14:35.278 15:13:03 -- host/failover.sh@75 -- # waitforlisten 70323 /var/tmp/bdevperf.sock 00:14:35.278 15:13:03 -- common/autotest_common.sh@829 -- # '[' -z 70323 ']' 00:14:35.278 15:13:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:35.278 15:13:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:35.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:35.278 15:13:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
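The lines above close out the 15-second failover run: the script greps its captured output for 'Resetting controller successful', requires exactly the 3 resets the scripted failovers are expected to produce, then starts a second bdevperf instance with -z so it idles until driven over its private RPC socket. A minimal sketch of that pattern, not the harness itself; the socket path, queue depth, I/O size and expected count of 3 come from the log, while the relative paths and the rpc_get_methods readiness probe are illustrative assumptions:

count=$(grep -c 'Resetting controller successful' try.txt)
(( count == 3 )) || { echo "expected 3 successful resets, got $count" >&2; exit 1; }

# -z: start bdevperf idle and wait for RPC-driven tests; -r: private RPC socket
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!

# Block until the RPC socket answers before configuring any bdevs
until ./scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done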
00:14:35.278 15:13:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:35.278 15:13:03 -- common/autotest_common.sh@10 -- # set +x 00:14:35.537 15:13:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:35.537 15:13:04 -- common/autotest_common.sh@862 -- # return 0 00:14:35.537 15:13:04 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:35.795 [2024-11-06 15:13:04.824066] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:35.795 15:13:04 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:35.795 [2024-11-06 15:13:05.056297] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:36.054 15:13:05 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:36.313 NVMe0n1 00:14:36.313 15:13:05 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:36.572 00:14:36.572 15:13:05 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:36.831 00:14:36.831 15:13:06 -- host/failover.sh@82 -- # grep -q NVMe0 00:14:36.831 15:13:06 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:37.088 15:13:06 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:37.347 15:13:06 -- host/failover.sh@87 -- # sleep 3 00:14:40.636 15:13:09 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:40.636 15:13:09 -- host/failover.sh@88 -- # grep -q NVMe0 00:14:40.636 15:13:09 -- host/failover.sh@90 -- # run_test_pid=70401 00:14:40.636 15:13:09 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:40.636 15:13:09 -- host/failover.sh@92 -- # wait 70401 00:14:42.013 0 00:14:42.013 15:13:11 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:42.013 [2024-11-06 15:13:03.614412] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
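Above, the target gains listeners on ports 4421 and 4422 and the bdevperf instance attaches the same controller name (-b NVMe0) to 10.0.0.2:4420, 4421 and 4422 in turn, which is what lets bdev_nvme fail over between those trids; detaching the 4420 path then forces the failover seen in try.txt below. A condensed sketch of those RPCs, using the same socket path, subsystem NQN and ports as the trace; the loop is just a compaction of the three separate calls, and the subsystem nqn.2016-06.io.spdk:cnode1 is assumed to already exist from earlier in the test:

# Target side: listen on the extra ports used as failover destinations
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# Initiator side (bdevperf): attach the same bdev name to each trid so the
# extra trids are kept as alternate paths
for port in 4420 4421 4422; do
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done

# Drop the currently active path; I/O should resume on 10.0.0.2:4421
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1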
00:14:42.013 [2024-11-06 15:13:03.614562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70323 ] 00:14:42.013 [2024-11-06 15:13:03.751637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.013 [2024-11-06 15:13:03.829837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.013 [2024-11-06 15:13:06.580702] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:14:42.013 [2024-11-06 15:13:06.580856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:42.013 [2024-11-06 15:13:06.580883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:42.013 [2024-11-06 15:13:06.580901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:42.013 [2024-11-06 15:13:06.580915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:42.013 [2024-11-06 15:13:06.580929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:42.013 [2024-11-06 15:13:06.580941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:42.013 [2024-11-06 15:13:06.580954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:42.013 [2024-11-06 15:13:06.580967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:42.013 [2024-11-06 15:13:06.580980] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:42.013 [2024-11-06 15:13:06.581046] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:42.013 [2024-11-06 15:13:06.581077] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c55690 (9): Bad file descriptor 00:14:42.013 [2024-11-06 15:13:06.584426] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:42.013 Running I/O for 1 seconds... 
00:14:42.013 00:14:42.013 Latency(us) 00:14:42.013 [2024-11-06T15:13:11.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.013 [2024-11-06T15:13:11.288Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:42.013 Verification LBA range: start 0x0 length 0x4000 00:14:42.013 NVMe0n1 : 1.01 14012.99 54.74 0.00 0.00 9088.19 1027.72 14477.50 00:14:42.013 [2024-11-06T15:13:11.288Z] =================================================================================================================== 00:14:42.013 [2024-11-06T15:13:11.288Z] Total : 14012.99 54.74 0.00 0.00 9088.19 1027.72 14477.50 00:14:42.013 15:13:11 -- host/failover.sh@95 -- # grep -q NVMe0 00:14:42.013 15:13:11 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:42.272 15:13:11 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:42.530 15:13:11 -- host/failover.sh@99 -- # grep -q NVMe0 00:14:42.530 15:13:11 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:42.789 15:13:11 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:43.048 15:13:12 -- host/failover.sh@101 -- # sleep 3 00:14:46.335 15:13:15 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:46.335 15:13:15 -- host/failover.sh@103 -- # grep -q NVMe0 00:14:46.335 15:13:15 -- host/failover.sh@108 -- # killprocess 70323 00:14:46.335 15:13:15 -- common/autotest_common.sh@936 -- # '[' -z 70323 ']' 00:14:46.335 15:13:15 -- common/autotest_common.sh@940 -- # kill -0 70323 00:14:46.335 15:13:15 -- common/autotest_common.sh@941 -- # uname 00:14:46.335 15:13:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:46.335 15:13:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70323 00:14:46.335 killing process with pid 70323 00:14:46.335 15:13:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:46.335 15:13:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:46.335 15:13:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70323' 00:14:46.335 15:13:15 -- common/autotest_common.sh@955 -- # kill 70323 00:14:46.335 15:13:15 -- common/autotest_common.sh@960 -- # wait 70323 00:14:46.335 15:13:15 -- host/failover.sh@110 -- # sync 00:14:46.594 15:13:15 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:46.594 15:13:15 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:14:46.594 15:13:15 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:46.594 15:13:15 -- host/failover.sh@116 -- # nvmftestfini 00:14:46.594 15:13:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:46.594 15:13:15 -- nvmf/common.sh@116 -- # sync 00:14:46.853 15:13:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:46.853 15:13:15 -- nvmf/common.sh@119 -- # set +e 00:14:46.853 15:13:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:46.853 15:13:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:46.853 rmmod nvme_tcp 
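The killprocess calls traced above follow a recurring pattern in these scripts: confirm the PID is still alive, look up its command name, log what is being killed, send the signal and wait so the next stage starts clean. A minimal sketch of that shape; the real helper lives in autotest_common.sh and has more branches (for example the reactor/sudo check visible in the trace), so treat this as illustrative only:

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap it and ignore its exit status
}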
00:14:46.853 rmmod nvme_fabrics 00:14:46.853 rmmod nvme_keyring 00:14:46.853 15:13:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:46.853 15:13:15 -- nvmf/common.sh@123 -- # set -e 00:14:46.853 15:13:15 -- nvmf/common.sh@124 -- # return 0 00:14:46.853 15:13:15 -- nvmf/common.sh@477 -- # '[' -n 70059 ']' 00:14:46.853 15:13:15 -- nvmf/common.sh@478 -- # killprocess 70059 00:14:46.853 15:13:15 -- common/autotest_common.sh@936 -- # '[' -z 70059 ']' 00:14:46.853 15:13:15 -- common/autotest_common.sh@940 -- # kill -0 70059 00:14:46.853 15:13:15 -- common/autotest_common.sh@941 -- # uname 00:14:46.853 15:13:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:46.853 15:13:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70059 00:14:46.853 killing process with pid 70059 00:14:46.853 15:13:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:46.853 15:13:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:46.853 15:13:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70059' 00:14:46.853 15:13:15 -- common/autotest_common.sh@955 -- # kill 70059 00:14:46.853 15:13:15 -- common/autotest_common.sh@960 -- # wait 70059 00:14:47.113 15:13:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:47.113 15:13:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:47.113 15:13:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:47.113 15:13:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:47.113 15:13:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:47.113 15:13:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.113 15:13:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.113 15:13:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.113 15:13:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:47.113 ************************************ 00:14:47.113 END TEST nvmf_failover 00:14:47.113 ************************************ 00:14:47.113 00:14:47.113 real 0m33.003s 00:14:47.113 user 2m8.419s 00:14:47.113 sys 0m5.382s 00:14:47.113 15:13:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:47.113 15:13:16 -- common/autotest_common.sh@10 -- # set +x 00:14:47.113 15:13:16 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:47.113 15:13:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:47.113 15:13:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:47.113 15:13:16 -- common/autotest_common.sh@10 -- # set +x 00:14:47.113 ************************************ 00:14:47.113 START TEST nvmf_discovery 00:14:47.113 ************************************ 00:14:47.113 15:13:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:47.113 * Looking for test storage... 
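The "END TEST nvmf_failover" banner with its real/user/sys times, followed immediately by "START TEST nvmf_discovery", comes from the run_test wrapper that times each sub-script and prints pass/fail banners around it. Roughly this shape, as a sketch only; the real helper in autotest_common.sh also handles xtrace toggling, argument checks and the exact banner/timing ordering seen in the log:

run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                       # e.g. run_test nvmf_discovery .../discovery.sh --transport=tcp
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}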
00:14:47.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:47.113 15:13:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:47.113 15:13:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:47.113 15:13:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:47.372 15:13:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:47.372 15:13:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:47.372 15:13:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:47.372 15:13:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:47.372 15:13:16 -- scripts/common.sh@335 -- # IFS=.-: 00:14:47.372 15:13:16 -- scripts/common.sh@335 -- # read -ra ver1 00:14:47.372 15:13:16 -- scripts/common.sh@336 -- # IFS=.-: 00:14:47.372 15:13:16 -- scripts/common.sh@336 -- # read -ra ver2 00:14:47.372 15:13:16 -- scripts/common.sh@337 -- # local 'op=<' 00:14:47.372 15:13:16 -- scripts/common.sh@339 -- # ver1_l=2 00:14:47.372 15:13:16 -- scripts/common.sh@340 -- # ver2_l=1 00:14:47.372 15:13:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:47.372 15:13:16 -- scripts/common.sh@343 -- # case "$op" in 00:14:47.372 15:13:16 -- scripts/common.sh@344 -- # : 1 00:14:47.372 15:13:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:47.372 15:13:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:47.372 15:13:16 -- scripts/common.sh@364 -- # decimal 1 00:14:47.372 15:13:16 -- scripts/common.sh@352 -- # local d=1 00:14:47.372 15:13:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:47.372 15:13:16 -- scripts/common.sh@354 -- # echo 1 00:14:47.372 15:13:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:47.372 15:13:16 -- scripts/common.sh@365 -- # decimal 2 00:14:47.372 15:13:16 -- scripts/common.sh@352 -- # local d=2 00:14:47.372 15:13:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:47.372 15:13:16 -- scripts/common.sh@354 -- # echo 2 00:14:47.372 15:13:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:47.372 15:13:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:47.372 15:13:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:47.372 15:13:16 -- scripts/common.sh@367 -- # return 0 00:14:47.372 15:13:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:47.372 15:13:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:47.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.372 --rc genhtml_branch_coverage=1 00:14:47.372 --rc genhtml_function_coverage=1 00:14:47.372 --rc genhtml_legend=1 00:14:47.372 --rc geninfo_all_blocks=1 00:14:47.372 --rc geninfo_unexecuted_blocks=1 00:14:47.372 00:14:47.372 ' 00:14:47.372 15:13:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:47.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.372 --rc genhtml_branch_coverage=1 00:14:47.372 --rc genhtml_function_coverage=1 00:14:47.372 --rc genhtml_legend=1 00:14:47.372 --rc geninfo_all_blocks=1 00:14:47.372 --rc geninfo_unexecuted_blocks=1 00:14:47.372 00:14:47.372 ' 00:14:47.372 15:13:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:47.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.372 --rc genhtml_branch_coverage=1 00:14:47.372 --rc genhtml_function_coverage=1 00:14:47.372 --rc genhtml_legend=1 00:14:47.372 --rc geninfo_all_blocks=1 00:14:47.372 --rc geninfo_unexecuted_blocks=1 00:14:47.372 00:14:47.372 ' 00:14:47.372 
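The lcov check traced above is the stock version comparison from scripts/common.sh: split both version strings on '.', '-' and ':' into arrays, then compare component by component until one side wins. A condensed sketch of the logic the trace walks through, handling only the '<' operator exercised here; the real helper supports the other operators and normalizes non-numeric components through its decimal() helper:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local v ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left side is newer: not "<"
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # left side is older: "<" holds
    done
    return 1                                              # equal: "<" does not hold
}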
15:13:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:47.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.372 --rc genhtml_branch_coverage=1 00:14:47.372 --rc genhtml_function_coverage=1 00:14:47.372 --rc genhtml_legend=1 00:14:47.372 --rc geninfo_all_blocks=1 00:14:47.372 --rc geninfo_unexecuted_blocks=1 00:14:47.372 00:14:47.372 ' 00:14:47.372 15:13:16 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:47.372 15:13:16 -- nvmf/common.sh@7 -- # uname -s 00:14:47.372 15:13:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.372 15:13:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.372 15:13:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.372 15:13:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.372 15:13:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.372 15:13:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.372 15:13:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.372 15:13:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.372 15:13:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.372 15:13:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.372 15:13:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:14:47.372 15:13:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:14:47.372 15:13:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.372 15:13:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.372 15:13:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:47.372 15:13:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:47.372 15:13:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.372 15:13:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.372 15:13:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.372 15:13:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.372 15:13:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.372 15:13:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.372 15:13:16 -- paths/export.sh@5 -- # export PATH 00:14:47.372 15:13:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.372 15:13:16 -- nvmf/common.sh@46 -- # : 0 00:14:47.373 15:13:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:47.373 15:13:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:47.373 15:13:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:47.373 15:13:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.373 15:13:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.373 15:13:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:47.373 15:13:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:47.373 15:13:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:47.373 15:13:16 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:14:47.373 15:13:16 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:14:47.373 15:13:16 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:47.373 15:13:16 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:14:47.373 15:13:16 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:14:47.373 15:13:16 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:14:47.373 15:13:16 -- host/discovery.sh@25 -- # nvmftestinit 00:14:47.373 15:13:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:47.373 15:13:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.373 15:13:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:47.373 15:13:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:47.373 15:13:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:47.373 15:13:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.373 15:13:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.373 15:13:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.373 15:13:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:47.373 15:13:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:47.373 15:13:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:47.373 15:13:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:47.373 15:13:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:47.373 15:13:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:47.373 15:13:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.373 15:13:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:47.373 15:13:16 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:47.373 15:13:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:47.373 15:13:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:47.373 15:13:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:47.373 15:13:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:47.373 15:13:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.373 15:13:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:47.373 15:13:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:47.373 15:13:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:47.373 15:13:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:47.373 15:13:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:47.373 15:13:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:47.373 Cannot find device "nvmf_tgt_br" 00:14:47.373 15:13:16 -- nvmf/common.sh@154 -- # true 00:14:47.373 15:13:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:47.373 Cannot find device "nvmf_tgt_br2" 00:14:47.373 15:13:16 -- nvmf/common.sh@155 -- # true 00:14:47.373 15:13:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:47.373 15:13:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:47.373 Cannot find device "nvmf_tgt_br" 00:14:47.373 15:13:16 -- nvmf/common.sh@157 -- # true 00:14:47.373 15:13:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:47.373 Cannot find device "nvmf_tgt_br2" 00:14:47.373 15:13:16 -- nvmf/common.sh@158 -- # true 00:14:47.373 15:13:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:47.373 15:13:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:47.373 15:13:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:47.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:47.373 15:13:16 -- nvmf/common.sh@161 -- # true 00:14:47.373 15:13:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:47.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:47.373 15:13:16 -- nvmf/common.sh@162 -- # true 00:14:47.373 15:13:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:47.373 15:13:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:47.373 15:13:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:47.373 15:13:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:47.632 15:13:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:47.632 15:13:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:47.632 15:13:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:47.632 15:13:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:47.632 15:13:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:47.632 15:13:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:47.632 15:13:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:47.632 15:13:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:47.632 15:13:16 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:47.632 15:13:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:47.632 15:13:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:47.632 15:13:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:47.632 15:13:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:47.632 15:13:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:47.632 15:13:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:47.632 15:13:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:47.632 15:13:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:47.632 15:13:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:47.632 15:13:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:47.632 15:13:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:47.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:47.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:14:47.632 00:14:47.632 --- 10.0.0.2 ping statistics --- 00:14:47.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.632 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:47.632 15:13:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:47.632 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:47.632 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.028 ms 00:14:47.632 00:14:47.632 --- 10.0.0.3 ping statistics --- 00:14:47.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.632 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:47.632 15:13:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:47.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:47.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:47.632 00:14:47.632 --- 10.0.0.1 ping statistics --- 00:14:47.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.632 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:47.632 15:13:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:47.632 15:13:16 -- nvmf/common.sh@421 -- # return 0 00:14:47.632 15:13:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:47.632 15:13:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:47.632 15:13:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:47.632 15:13:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:47.633 15:13:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:47.633 15:13:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:47.633 15:13:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:47.633 15:13:16 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:14:47.633 15:13:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:47.633 15:13:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:47.633 15:13:16 -- common/autotest_common.sh@10 -- # set +x 00:14:47.633 15:13:16 -- nvmf/common.sh@469 -- # nvmfpid=70679 00:14:47.633 15:13:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:47.633 15:13:16 -- nvmf/common.sh@470 -- # waitforlisten 70679 00:14:47.633 15:13:16 -- common/autotest_common.sh@829 -- # '[' -z 70679 ']' 00:14:47.633 15:13:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.633 15:13:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:47.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.633 15:13:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.633 15:13:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:47.633 15:13:16 -- common/autotest_common.sh@10 -- # set +x 00:14:47.633 [2024-11-06 15:13:16.891898] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:47.633 [2024-11-06 15:13:16.892235] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.891 [2024-11-06 15:13:17.034419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.891 [2024-11-06 15:13:17.082830] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:47.891 [2024-11-06 15:13:17.083297] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.891 [2024-11-06 15:13:17.083326] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.891 [2024-11-06 15:13:17.083336] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
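At this point nvmf_veth_init has finished wiring up the virtual test network and the target application is starting inside the namespace. A minimal standalone sketch of the topology the trace above assembles is below; the interface names, bridge name, addresses, and iptables rules are taken directly from the log, while ordering and error handling are simplified relative to the real helper in test/nvmf/common.sh.

# Sketch of the veth/namespace topology built by nvmf_veth_init (simplified).
ip netns add nvmf_tgt_ns_spdk

# One host-side initiator pair plus two target pairs; the target ends are
# moved into the namespace where nvmf_tgt runs.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator at 10.0.0.1; the target listens on 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring the links up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Accept NVMe/TCP traffic on the default port, allow forwarding across the
# bridge, and confirm reachability in both directions.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The target itself is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 70679 above), so every connection the discovery test makes to 10.0.0.2 or 10.0.0.3 crosses this bridge.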
00:14:47.891 [2024-11-06 15:13:17.083371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.826 15:13:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.826 15:13:17 -- common/autotest_common.sh@862 -- # return 0 00:14:48.826 15:13:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:48.826 15:13:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:48.826 15:13:17 -- common/autotest_common.sh@10 -- # set +x 00:14:48.826 15:13:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.826 15:13:17 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:48.826 15:13:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.826 15:13:17 -- common/autotest_common.sh@10 -- # set +x 00:14:48.826 [2024-11-06 15:13:17.956221] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.826 15:13:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.826 15:13:17 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:14:48.826 15:13:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.826 15:13:17 -- common/autotest_common.sh@10 -- # set +x 00:14:48.826 [2024-11-06 15:13:17.964338] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:48.826 15:13:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.826 15:13:17 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:14:48.826 15:13:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.826 15:13:17 -- common/autotest_common.sh@10 -- # set +x 00:14:48.826 null0 00:14:48.826 15:13:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.826 15:13:17 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:14:48.826 15:13:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.826 15:13:17 -- common/autotest_common.sh@10 -- # set +x 00:14:48.826 null1 00:14:48.826 15:13:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.826 15:13:17 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:14:48.826 15:13:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.826 15:13:17 -- common/autotest_common.sh@10 -- # set +x 00:14:48.826 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:14:48.826 15:13:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.826 15:13:17 -- host/discovery.sh@45 -- # hostpid=70711 00:14:48.826 15:13:17 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:14:48.826 15:13:17 -- host/discovery.sh@46 -- # waitforlisten 70711 /tmp/host.sock 00:14:48.826 15:13:17 -- common/autotest_common.sh@829 -- # '[' -z 70711 ']' 00:14:48.826 15:13:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:14:48.826 15:13:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:48.826 15:13:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:14:48.826 15:13:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:48.826 15:13:17 -- common/autotest_common.sh@10 -- # set +x 00:14:48.826 [2024-11-06 15:13:18.049844] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:14:48.826 [2024-11-06 15:13:18.050379] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70711 ] 00:14:49.084 [2024-11-06 15:13:18.189760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.084 [2024-11-06 15:13:18.242990] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:49.084 [2024-11-06 15:13:18.243466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.018 15:13:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:50.018 15:13:19 -- common/autotest_common.sh@862 -- # return 0 00:14:50.018 15:13:19 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:50.018 15:13:19 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:14:50.018 15:13:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.018 15:13:19 -- common/autotest_common.sh@10 -- # set +x 00:14:50.018 15:13:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.018 15:13:19 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:14:50.018 15:13:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.018 15:13:19 -- common/autotest_common.sh@10 -- # set +x 00:14:50.018 15:13:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.018 15:13:19 -- host/discovery.sh@72 -- # notify_id=0 00:14:50.018 15:13:19 -- host/discovery.sh@78 -- # get_subsystem_names 00:14:50.018 15:13:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:50.018 15:13:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.018 15:13:19 -- common/autotest_common.sh@10 -- # set +x 00:14:50.018 15:13:19 -- host/discovery.sh@59 -- # sort 00:14:50.018 15:13:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:50.018 15:13:19 -- host/discovery.sh@59 -- # xargs 00:14:50.018 15:13:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.018 15:13:19 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:14:50.018 15:13:19 -- host/discovery.sh@79 -- # get_bdev_list 00:14:50.018 15:13:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:50.018 15:13:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.018 15:13:19 -- common/autotest_common.sh@10 -- # set +x 00:14:50.018 15:13:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:50.018 15:13:19 -- host/discovery.sh@55 -- # sort 00:14:50.018 15:13:19 -- host/discovery.sh@55 -- # xargs 00:14:50.018 15:13:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.018 15:13:19 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:14:50.018 15:13:19 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:14:50.018 15:13:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.018 15:13:19 -- common/autotest_common.sh@10 -- # set +x 00:14:50.018 15:13:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.018 15:13:19 -- host/discovery.sh@82 -- # get_subsystem_names 00:14:50.018 15:13:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:50.018 15:13:19 -- host/discovery.sh@59 -- # sort 00:14:50.018 15:13:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers 00:14:50.018 15:13:19 -- host/discovery.sh@59 -- # xargs 00:14:50.018 15:13:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.018 15:13:19 -- common/autotest_common.sh@10 -- # set +x 00:14:50.018 15:13:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.018 15:13:19 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:14:50.018 15:13:19 -- host/discovery.sh@83 -- # get_bdev_list 00:14:50.018 15:13:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:50.018 15:13:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:50.018 15:13:19 -- host/discovery.sh@55 -- # sort 00:14:50.018 15:13:19 -- host/discovery.sh@55 -- # xargs 00:14:50.018 15:13:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.018 15:13:19 -- common/autotest_common.sh@10 -- # set +x 00:14:50.018 15:13:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.018 15:13:19 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:14:50.018 15:13:19 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:14:50.018 15:13:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.018 15:13:19 -- common/autotest_common.sh@10 -- # set +x 00:14:50.018 15:13:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.018 15:13:19 -- host/discovery.sh@86 -- # get_subsystem_names 00:14:50.018 15:13:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:50.018 15:13:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.018 15:13:19 -- common/autotest_common.sh@10 -- # set +x 00:14:50.018 15:13:19 -- host/discovery.sh@59 -- # sort 00:14:50.018 15:13:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:50.018 15:13:19 -- host/discovery.sh@59 -- # xargs 00:14:50.277 15:13:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.277 15:13:19 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:14:50.277 15:13:19 -- host/discovery.sh@87 -- # get_bdev_list 00:14:50.277 15:13:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:50.277 15:13:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:50.277 15:13:19 -- host/discovery.sh@55 -- # sort 00:14:50.277 15:13:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.277 15:13:19 -- host/discovery.sh@55 -- # xargs 00:14:50.277 15:13:19 -- common/autotest_common.sh@10 -- # set +x 00:14:50.277 15:13:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.277 15:13:19 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:14:50.277 15:13:19 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:50.277 15:13:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.277 15:13:19 -- common/autotest_common.sh@10 -- # set +x 00:14:50.277 [2024-11-06 15:13:19.392800] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.277 15:13:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.277 15:13:19 -- host/discovery.sh@92 -- # get_subsystem_names 00:14:50.277 15:13:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:50.277 15:13:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:50.277 15:13:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.277 15:13:19 -- common/autotest_common.sh@10 -- # set +x 00:14:50.277 15:13:19 -- host/discovery.sh@59 -- # xargs 00:14:50.277 15:13:19 -- host/discovery.sh@59 -- # sort 
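The get_subsystem_names and get_bdev_list checks that recur for the rest of the run are thin wrappers over two host-side RPCs. Reconstructed from the @59/@55 pipelines traced above, as a rough sketch (the real helpers live in test/nvmf/host/discovery.sh and may differ in detail):

# Hedged reconstruction of the two helpers exercised repeatedly above.
get_subsystem_names() {
    # Controllers the host created from the discovery log page, as a sorted list.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # Namespaces attached through those controllers, exposed as host bdevs.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Typical assertions in this test: both lists stay empty until the target
# subsystem gains a listener and namespaces, then become "nvme0" and
# "nvme0n1 nvme0n2" respectively.
[[ $(get_subsystem_names) == '' ]]
[[ $(get_bdev_list) == '' ]]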
00:14:50.277 15:13:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.277 15:13:19 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:14:50.277 15:13:19 -- host/discovery.sh@93 -- # get_bdev_list 00:14:50.277 15:13:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:50.277 15:13:19 -- host/discovery.sh@55 -- # sort 00:14:50.277 15:13:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:50.277 15:13:19 -- host/discovery.sh@55 -- # xargs 00:14:50.277 15:13:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.277 15:13:19 -- common/autotest_common.sh@10 -- # set +x 00:14:50.277 15:13:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.277 15:13:19 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:14:50.277 15:13:19 -- host/discovery.sh@94 -- # get_notification_count 00:14:50.277 15:13:19 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:50.277 15:13:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.277 15:13:19 -- common/autotest_common.sh@10 -- # set +x 00:14:50.277 15:13:19 -- host/discovery.sh@74 -- # jq '. | length' 00:14:50.277 15:13:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.536 15:13:19 -- host/discovery.sh@74 -- # notification_count=0 00:14:50.536 15:13:19 -- host/discovery.sh@75 -- # notify_id=0 00:14:50.536 15:13:19 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:14:50.536 15:13:19 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:14:50.536 15:13:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.536 15:13:19 -- common/autotest_common.sh@10 -- # set +x 00:14:50.536 15:13:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.536 15:13:19 -- host/discovery.sh@100 -- # sleep 1 00:14:50.794 [2024-11-06 15:13:20.049175] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:14:50.794 [2024-11-06 15:13:20.049216] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:14:50.795 [2024-11-06 15:13:20.049234] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:50.795 [2024-11-06 15:13:20.055275] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:14:51.052 [2024-11-06 15:13:20.111047] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:14:51.052 [2024-11-06 15:13:20.111273] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:14:51.313 15:13:20 -- host/discovery.sh@101 -- # get_subsystem_names 00:14:51.314 15:13:20 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:51.314 15:13:20 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:51.314 15:13:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.314 15:13:20 -- common/autotest_common.sh@10 -- # set +x 00:14:51.314 15:13:20 -- host/discovery.sh@59 -- # xargs 00:14:51.314 15:13:20 -- host/discovery.sh@59 -- # sort 00:14:51.606 15:13:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.606 15:13:20 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.606 15:13:20 -- host/discovery.sh@102 -- # get_bdev_list 00:14:51.606 15:13:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:14:51.606 15:13:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.606 15:13:20 -- common/autotest_common.sh@10 -- # set +x 00:14:51.606 15:13:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:51.606 15:13:20 -- host/discovery.sh@55 -- # sort 00:14:51.606 15:13:20 -- host/discovery.sh@55 -- # xargs 00:14:51.606 15:13:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.606 15:13:20 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:14:51.606 15:13:20 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:14:51.606 15:13:20 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:51.606 15:13:20 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:51.606 15:13:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.606 15:13:20 -- host/discovery.sh@63 -- # sort -n 00:14:51.606 15:13:20 -- common/autotest_common.sh@10 -- # set +x 00:14:51.606 15:13:20 -- host/discovery.sh@63 -- # xargs 00:14:51.606 15:13:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.606 15:13:20 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:14:51.606 15:13:20 -- host/discovery.sh@104 -- # get_notification_count 00:14:51.606 15:13:20 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:51.606 15:13:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.606 15:13:20 -- common/autotest_common.sh@10 -- # set +x 00:14:51.606 15:13:20 -- host/discovery.sh@74 -- # jq '. | length' 00:14:51.606 15:13:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.606 15:13:20 -- host/discovery.sh@74 -- # notification_count=1 00:14:51.606 15:13:20 -- host/discovery.sh@75 -- # notify_id=1 00:14:51.606 15:13:20 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:14:51.606 15:13:20 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:14:51.606 15:13:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.606 15:13:20 -- common/autotest_common.sh@10 -- # set +x 00:14:51.606 15:13:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.606 15:13:20 -- host/discovery.sh@109 -- # sleep 1 00:14:52.550 15:13:21 -- host/discovery.sh@110 -- # get_bdev_list 00:14:52.550 15:13:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:52.550 15:13:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:52.550 15:13:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.550 15:13:21 -- host/discovery.sh@55 -- # sort 00:14:52.550 15:13:21 -- common/autotest_common.sh@10 -- # set +x 00:14:52.550 15:13:21 -- host/discovery.sh@55 -- # xargs 00:14:52.808 15:13:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.808 15:13:21 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:52.808 15:13:21 -- host/discovery.sh@111 -- # get_notification_count 00:14:52.808 15:13:21 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:14:52.808 15:13:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.808 15:13:21 -- common/autotest_common.sh@10 -- # set +x 00:14:52.808 15:13:21 -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:52.808 15:13:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.808 15:13:21 -- host/discovery.sh@74 -- # notification_count=1 00:14:52.808 15:13:21 -- host/discovery.sh@75 -- # notify_id=2 00:14:52.808 15:13:21 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:14:52.808 15:13:21 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:14:52.808 15:13:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.808 15:13:21 -- common/autotest_common.sh@10 -- # set +x 00:14:52.808 [2024-11-06 15:13:21.927609] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:52.808 [2024-11-06 15:13:21.927875] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:14:52.808 [2024-11-06 15:13:21.927907] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:52.808 15:13:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.808 15:13:21 -- host/discovery.sh@117 -- # sleep 1 00:14:52.808 [2024-11-06 15:13:21.933872] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:14:52.808 [2024-11-06 15:13:21.992338] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:14:52.808 [2024-11-06 15:13:21.992362] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:14:52.808 [2024-11-06 15:13:21.992368] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:14:53.743 15:13:22 -- host/discovery.sh@118 -- # get_subsystem_names 00:14:53.743 15:13:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:53.743 15:13:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.743 15:13:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.743 15:13:22 -- host/discovery.sh@59 -- # sort 00:14:53.743 15:13:22 -- host/discovery.sh@59 -- # xargs 00:14:53.743 15:13:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:53.743 15:13:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.743 15:13:22 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.743 15:13:22 -- host/discovery.sh@119 -- # get_bdev_list 00:14:53.743 15:13:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:53.743 15:13:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.743 15:13:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.743 15:13:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:53.743 15:13:22 -- host/discovery.sh@55 -- # sort 00:14:53.744 15:13:22 -- host/discovery.sh@55 -- # xargs 00:14:54.002 15:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.002 15:13:23 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:54.002 15:13:23 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:14:54.002 15:13:23 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:54.002 15:13:23 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:54.002 15:13:23 -- host/discovery.sh@63 -- # xargs 00:14:54.002 15:13:23 -- host/discovery.sh@63 -- # sort -n 00:14:54.002 15:13:23 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:54.002 15:13:23 -- common/autotest_common.sh@10 -- # set +x 00:14:54.002 15:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.002 15:13:23 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:14:54.002 15:13:23 -- host/discovery.sh@121 -- # get_notification_count 00:14:54.002 15:13:23 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:54.002 15:13:23 -- host/discovery.sh@74 -- # jq '. | length' 00:14:54.002 15:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.002 15:13:23 -- common/autotest_common.sh@10 -- # set +x 00:14:54.002 15:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.002 15:13:23 -- host/discovery.sh@74 -- # notification_count=0 00:14:54.002 15:13:23 -- host/discovery.sh@75 -- # notify_id=2 00:14:54.002 15:13:23 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:14:54.002 15:13:23 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:54.002 15:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.002 15:13:23 -- common/autotest_common.sh@10 -- # set +x 00:14:54.002 [2024-11-06 15:13:23.166441] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:14:54.002 [2024-11-06 15:13:23.166476] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:54.002 [2024-11-06 15:13:23.170146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.002 [2024-11-06 15:13:23.170181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:54.002 [2024-11-06 15:13:23.170193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.002 [2024-11-06 15:13:23.170202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:54.002 [2024-11-06 15:13:23.170210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.002 [2024-11-06 15:13:23.170218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:54.002 [2024-11-06 15:13:23.170226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.002 [2024-11-06 15:13:23.170235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:54.002 [2024-11-06 15:13:23.170243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ecc10 is same with the state(5) to be set 00:14:54.002 15:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.002 15:13:23 -- host/discovery.sh@127 -- # sleep 1 00:14:54.002 [2024-11-06 15:13:23.172432] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:14:54.002 [2024-11-06 15:13:23.172461] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:14:54.002 [2024-11-06 15:13:23.172518] 
nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4ecc10 (9): Bad file descriptor 00:14:54.936 15:13:24 -- host/discovery.sh@128 -- # get_subsystem_names 00:14:54.936 15:13:24 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:54.936 15:13:24 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:54.936 15:13:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.936 15:13:24 -- host/discovery.sh@59 -- # sort 00:14:54.936 15:13:24 -- common/autotest_common.sh@10 -- # set +x 00:14:54.936 15:13:24 -- host/discovery.sh@59 -- # xargs 00:14:54.936 15:13:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.195 15:13:24 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.195 15:13:24 -- host/discovery.sh@129 -- # get_bdev_list 00:14:55.195 15:13:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:55.195 15:13:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:55.195 15:13:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.195 15:13:24 -- common/autotest_common.sh@10 -- # set +x 00:14:55.195 15:13:24 -- host/discovery.sh@55 -- # sort 00:14:55.195 15:13:24 -- host/discovery.sh@55 -- # xargs 00:14:55.195 15:13:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.195 15:13:24 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:55.195 15:13:24 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:14:55.195 15:13:24 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:55.195 15:13:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.195 15:13:24 -- common/autotest_common.sh@10 -- # set +x 00:14:55.195 15:13:24 -- host/discovery.sh@63 -- # sort -n 00:14:55.195 15:13:24 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:55.195 15:13:24 -- host/discovery.sh@63 -- # xargs 00:14:55.195 15:13:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.195 15:13:24 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:14:55.195 15:13:24 -- host/discovery.sh@131 -- # get_notification_count 00:14:55.195 15:13:24 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:55.195 15:13:24 -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:55.195 15:13:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.195 15:13:24 -- common/autotest_common.sh@10 -- # set +x 00:14:55.195 15:13:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.195 15:13:24 -- host/discovery.sh@74 -- # notification_count=0 00:14:55.195 15:13:24 -- host/discovery.sh@75 -- # notify_id=2 00:14:55.195 15:13:24 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:14:55.195 15:13:24 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:14:55.195 15:13:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.195 15:13:24 -- common/autotest_common.sh@10 -- # set +x 00:14:55.195 15:13:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.195 15:13:24 -- host/discovery.sh@135 -- # sleep 1 00:14:56.571 15:13:25 -- host/discovery.sh@136 -- # get_subsystem_names 00:14:56.571 15:13:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:56.571 15:13:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:56.571 15:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.571 15:13:25 -- common/autotest_common.sh@10 -- # set +x 00:14:56.571 15:13:25 -- host/discovery.sh@59 -- # sort 00:14:56.571 15:13:25 -- host/discovery.sh@59 -- # xargs 00:14:56.571 15:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.571 15:13:25 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:14:56.571 15:13:25 -- host/discovery.sh@137 -- # get_bdev_list 00:14:56.571 15:13:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:56.571 15:13:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:56.571 15:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.571 15:13:25 -- host/discovery.sh@55 -- # sort 00:14:56.571 15:13:25 -- common/autotest_common.sh@10 -- # set +x 00:14:56.571 15:13:25 -- host/discovery.sh@55 -- # xargs 00:14:56.571 15:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.571 15:13:25 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:14:56.571 15:13:25 -- host/discovery.sh@138 -- # get_notification_count 00:14:56.571 15:13:25 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:56.571 15:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.571 15:13:25 -- common/autotest_common.sh@10 -- # set +x 00:14:56.571 15:13:25 -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:56.571 15:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.571 15:13:25 -- host/discovery.sh@74 -- # notification_count=2 00:14:56.571 15:13:25 -- host/discovery.sh@75 -- # notify_id=4 00:14:56.571 15:13:25 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:14:56.571 15:13:25 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:56.571 15:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.571 15:13:25 -- common/autotest_common.sh@10 -- # set +x 00:14:57.507 [2024-11-06 15:13:26.601760] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:14:57.507 [2024-11-06 15:13:26.601791] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:14:57.507 [2024-11-06 15:13:26.601808] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:57.507 [2024-11-06 15:13:26.607822] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:14:57.507 [2024-11-06 15:13:26.667531] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:14:57.507 [2024-11-06 15:13:26.667578] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:14:57.507 15:13:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.507 15:13:26 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:57.507 15:13:26 -- common/autotest_common.sh@650 -- # local es=0 00:14:57.507 15:13:26 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:57.507 15:13:26 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:57.507 15:13:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:57.507 15:13:26 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:57.507 15:13:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:57.507 15:13:26 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:57.507 15:13:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.507 15:13:26 -- common/autotest_common.sh@10 -- # set +x 00:14:57.507 request: 00:14:57.507 { 00:14:57.507 "name": "nvme", 00:14:57.507 "trtype": "tcp", 00:14:57.507 "traddr": "10.0.0.2", 00:14:57.507 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:57.507 "adrfam": "ipv4", 00:14:57.507 "trsvcid": "8009", 00:14:57.507 "wait_for_attach": true, 00:14:57.507 "method": "bdev_nvme_start_discovery", 00:14:57.507 "req_id": 1 00:14:57.507 } 00:14:57.507 Got JSON-RPC error response 00:14:57.507 response: 00:14:57.507 { 00:14:57.507 "code": -17, 00:14:57.507 "message": "File exists" 00:14:57.507 } 00:14:57.507 15:13:26 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:57.507 15:13:26 -- common/autotest_common.sh@653 -- # es=1 00:14:57.507 15:13:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:57.507 15:13:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:57.507 15:13:26 -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:57.507 15:13:26 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:14:57.507 15:13:26 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:57.507 15:13:26 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:57.507 15:13:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.507 15:13:26 -- host/discovery.sh@67 -- # sort 00:14:57.507 15:13:26 -- common/autotest_common.sh@10 -- # set +x 00:14:57.507 15:13:26 -- host/discovery.sh@67 -- # xargs 00:14:57.507 15:13:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.507 15:13:26 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:14:57.507 15:13:26 -- host/discovery.sh@147 -- # get_bdev_list 00:14:57.507 15:13:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:57.507 15:13:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.507 15:13:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:57.507 15:13:26 -- common/autotest_common.sh@10 -- # set +x 00:14:57.507 15:13:26 -- host/discovery.sh@55 -- # sort 00:14:57.507 15:13:26 -- host/discovery.sh@55 -- # xargs 00:14:57.507 15:13:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.766 15:13:26 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:57.766 15:13:26 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:57.766 15:13:26 -- common/autotest_common.sh@650 -- # local es=0 00:14:57.766 15:13:26 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:57.766 15:13:26 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:57.766 15:13:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:57.766 15:13:26 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:57.766 15:13:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:57.767 15:13:26 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:57.767 15:13:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.767 15:13:26 -- common/autotest_common.sh@10 -- # set +x 00:14:57.767 request: 00:14:57.767 { 00:14:57.767 "name": "nvme_second", 00:14:57.767 "trtype": "tcp", 00:14:57.767 "traddr": "10.0.0.2", 00:14:57.767 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:57.767 "adrfam": "ipv4", 00:14:57.767 "trsvcid": "8009", 00:14:57.767 "wait_for_attach": true, 00:14:57.767 "method": "bdev_nvme_start_discovery", 00:14:57.767 "req_id": 1 00:14:57.767 } 00:14:57.767 Got JSON-RPC error response 00:14:57.767 response: 00:14:57.767 { 00:14:57.767 "code": -17, 00:14:57.767 "message": "File exists" 00:14:57.767 } 00:14:57.767 15:13:26 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:57.767 15:13:26 -- common/autotest_common.sh@653 -- # es=1 00:14:57.767 15:13:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:57.767 15:13:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:57.767 15:13:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:57.767 15:13:26 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:14:57.767 15:13:26 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_discovery_info 00:14:57.767 15:13:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.767 15:13:26 -- common/autotest_common.sh@10 -- # set +x 00:14:57.767 15:13:26 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:57.767 15:13:26 -- host/discovery.sh@67 -- # sort 00:14:57.767 15:13:26 -- host/discovery.sh@67 -- # xargs 00:14:57.767 15:13:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.767 15:13:26 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:14:57.767 15:13:26 -- host/discovery.sh@153 -- # get_bdev_list 00:14:57.767 15:13:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:57.767 15:13:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.767 15:13:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:57.767 15:13:26 -- common/autotest_common.sh@10 -- # set +x 00:14:57.767 15:13:26 -- host/discovery.sh@55 -- # sort 00:14:57.767 15:13:26 -- host/discovery.sh@55 -- # xargs 00:14:57.767 15:13:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.767 15:13:26 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:57.767 15:13:26 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:57.767 15:13:26 -- common/autotest_common.sh@650 -- # local es=0 00:14:57.767 15:13:26 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:57.767 15:13:26 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:57.767 15:13:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:57.767 15:13:26 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:57.767 15:13:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:57.767 15:13:26 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:57.767 15:13:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.767 15:13:26 -- common/autotest_common.sh@10 -- # set +x 00:14:58.702 [2024-11-06 15:13:27.937253] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:14:58.702 [2024-11-06 15:13:27.937401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:14:58.702 [2024-11-06 15:13:27.937459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:14:58.702 [2024-11-06 15:13:27.937491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4ef8a0 with addr=10.0.0.2, port=8010 00:14:58.702 [2024-11-06 15:13:27.937510] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:58.702 [2024-11-06 15:13:27.937520] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:58.702 [2024-11-06 15:13:27.937529] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:00.078 [2024-11-06 15:13:28.937210] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:00.078 [2024-11-06 15:13:28.937316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:00.078 [2024-11-06 15:13:28.937358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:00.078 [2024-11-06 
15:13:28.937374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4ef8a0 with addr=10.0.0.2, port=8010 00:15:00.078 [2024-11-06 15:13:28.937391] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:00.078 [2024-11-06 15:13:28.937401] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:00.078 [2024-11-06 15:13:28.937410] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:01.013 [2024-11-06 15:13:29.937099] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:15:01.013 request: 00:15:01.013 { 00:15:01.013 "name": "nvme_second", 00:15:01.013 "trtype": "tcp", 00:15:01.013 "traddr": "10.0.0.2", 00:15:01.013 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:01.013 "adrfam": "ipv4", 00:15:01.013 "trsvcid": "8010", 00:15:01.013 "attach_timeout_ms": 3000, 00:15:01.013 "method": "bdev_nvme_start_discovery", 00:15:01.013 "req_id": 1 00:15:01.013 } 00:15:01.013 Got JSON-RPC error response 00:15:01.013 response: 00:15:01.013 { 00:15:01.013 "code": -110, 00:15:01.013 "message": "Connection timed out" 00:15:01.013 } 00:15:01.013 15:13:29 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:01.013 15:13:29 -- common/autotest_common.sh@653 -- # es=1 00:15:01.013 15:13:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:01.013 15:13:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:01.013 15:13:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:01.013 15:13:29 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:15:01.013 15:13:29 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:01.013 15:13:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.013 15:13:29 -- common/autotest_common.sh@10 -- # set +x 00:15:01.013 15:13:29 -- host/discovery.sh@67 -- # sort 00:15:01.013 15:13:29 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:01.013 15:13:29 -- host/discovery.sh@67 -- # xargs 00:15:01.013 15:13:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.013 15:13:30 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:15:01.013 15:13:30 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:15:01.013 15:13:30 -- host/discovery.sh@162 -- # kill 70711 00:15:01.013 15:13:30 -- host/discovery.sh@163 -- # nvmftestfini 00:15:01.013 15:13:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:01.013 15:13:30 -- nvmf/common.sh@116 -- # sync 00:15:01.013 15:13:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:01.013 15:13:30 -- nvmf/common.sh@119 -- # set +e 00:15:01.013 15:13:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:01.013 15:13:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:01.013 rmmod nvme_tcp 00:15:01.013 rmmod nvme_fabrics 00:15:01.013 rmmod nvme_keyring 00:15:01.013 15:13:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:01.013 15:13:30 -- nvmf/common.sh@123 -- # set -e 00:15:01.013 15:13:30 -- nvmf/common.sh@124 -- # return 0 00:15:01.013 15:13:30 -- nvmf/common.sh@477 -- # '[' -n 70679 ']' 00:15:01.013 15:13:30 -- nvmf/common.sh@478 -- # killprocess 70679 00:15:01.013 15:13:30 -- common/autotest_common.sh@936 -- # '[' -z 70679 ']' 00:15:01.013 15:13:30 -- common/autotest_common.sh@940 -- # kill -0 70679 00:15:01.013 15:13:30 -- common/autotest_common.sh@941 -- # uname 00:15:01.013 15:13:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:01.013 15:13:30 
-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70679 00:15:01.013 15:13:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:01.013 15:13:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:01.013 killing process with pid 70679 00:15:01.013 15:13:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70679' 00:15:01.013 15:13:30 -- common/autotest_common.sh@955 -- # kill 70679 00:15:01.013 15:13:30 -- common/autotest_common.sh@960 -- # wait 70679 00:15:01.272 15:13:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:01.272 15:13:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:01.272 15:13:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:01.272 15:13:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:01.272 15:13:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:01.272 15:13:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.272 15:13:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:01.272 15:13:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.272 15:13:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:01.272 00:15:01.272 real 0m14.085s 00:15:01.272 user 0m26.960s 00:15:01.272 sys 0m2.225s 00:15:01.272 15:13:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:01.272 15:13:30 -- common/autotest_common.sh@10 -- # set +x 00:15:01.272 ************************************ 00:15:01.272 END TEST nvmf_discovery 00:15:01.272 ************************************ 00:15:01.272 15:13:30 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:01.272 15:13:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:01.272 15:13:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:01.272 15:13:30 -- common/autotest_common.sh@10 -- # set +x 00:15:01.272 ************************************ 00:15:01.272 START TEST nvmf_discovery_remove_ifc 00:15:01.272 ************************************ 00:15:01.272 15:13:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:01.272 * Looking for test storage... 
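Before nvmf_discovery_remove_ifc repeats the same prologue, the shutdown that closed out nvmf_discovery above is worth condensing. A rough sketch of the nvmftestfini sequence, with the PIDs and module names taken from this run and the trap/error plumbing omitted:

# Hedged recap of the teardown traced above.
kill 70711                       # host-side nvmf_tgt listening on /tmp/host.sock
sync
modprobe -v -r nvme-tcp          # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
kill 70679 && wait 70679         # target reactor (reactor_1) running in the namespace
_remove_spdk_ns                  # drop nvmf_tgt_ns_spdk
ip -4 addr flush nvmf_init_if    # clear the initiator address before the next test reinitializes

The next test then starts from the same storage-detection and nvmftestinit steps seen at the top of this section.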
00:15:01.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:01.272 15:13:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:01.272 15:13:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:01.272 15:13:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:01.272 15:13:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:01.272 15:13:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:01.272 15:13:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:01.272 15:13:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:01.272 15:13:30 -- scripts/common.sh@335 -- # IFS=.-: 00:15:01.272 15:13:30 -- scripts/common.sh@335 -- # read -ra ver1 00:15:01.272 15:13:30 -- scripts/common.sh@336 -- # IFS=.-: 00:15:01.272 15:13:30 -- scripts/common.sh@336 -- # read -ra ver2 00:15:01.272 15:13:30 -- scripts/common.sh@337 -- # local 'op=<' 00:15:01.272 15:13:30 -- scripts/common.sh@339 -- # ver1_l=2 00:15:01.272 15:13:30 -- scripts/common.sh@340 -- # ver2_l=1 00:15:01.272 15:13:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:01.272 15:13:30 -- scripts/common.sh@343 -- # case "$op" in 00:15:01.272 15:13:30 -- scripts/common.sh@344 -- # : 1 00:15:01.272 15:13:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:01.272 15:13:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:01.272 15:13:30 -- scripts/common.sh@364 -- # decimal 1 00:15:01.531 15:13:30 -- scripts/common.sh@352 -- # local d=1 00:15:01.531 15:13:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:01.531 15:13:30 -- scripts/common.sh@354 -- # echo 1 00:15:01.531 15:13:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:01.531 15:13:30 -- scripts/common.sh@365 -- # decimal 2 00:15:01.531 15:13:30 -- scripts/common.sh@352 -- # local d=2 00:15:01.531 15:13:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:01.531 15:13:30 -- scripts/common.sh@354 -- # echo 2 00:15:01.531 15:13:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:01.531 15:13:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:01.531 15:13:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:01.532 15:13:30 -- scripts/common.sh@367 -- # return 0 00:15:01.532 15:13:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:01.532 15:13:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:01.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.532 --rc genhtml_branch_coverage=1 00:15:01.532 --rc genhtml_function_coverage=1 00:15:01.532 --rc genhtml_legend=1 00:15:01.532 --rc geninfo_all_blocks=1 00:15:01.532 --rc geninfo_unexecuted_blocks=1 00:15:01.532 00:15:01.532 ' 00:15:01.532 15:13:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:01.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.532 --rc genhtml_branch_coverage=1 00:15:01.532 --rc genhtml_function_coverage=1 00:15:01.532 --rc genhtml_legend=1 00:15:01.532 --rc geninfo_all_blocks=1 00:15:01.532 --rc geninfo_unexecuted_blocks=1 00:15:01.532 00:15:01.532 ' 00:15:01.532 15:13:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:01.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.532 --rc genhtml_branch_coverage=1 00:15:01.532 --rc genhtml_function_coverage=1 00:15:01.532 --rc genhtml_legend=1 00:15:01.532 --rc geninfo_all_blocks=1 00:15:01.532 --rc geninfo_unexecuted_blocks=1 00:15:01.532 00:15:01.532 ' 00:15:01.532 
15:13:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:01.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.532 --rc genhtml_branch_coverage=1 00:15:01.532 --rc genhtml_function_coverage=1 00:15:01.532 --rc genhtml_legend=1 00:15:01.532 --rc geninfo_all_blocks=1 00:15:01.532 --rc geninfo_unexecuted_blocks=1 00:15:01.532 00:15:01.532 ' 00:15:01.532 15:13:30 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:01.532 15:13:30 -- nvmf/common.sh@7 -- # uname -s 00:15:01.532 15:13:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.532 15:13:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.532 15:13:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.532 15:13:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.532 15:13:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.532 15:13:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.532 15:13:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.532 15:13:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.532 15:13:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.532 15:13:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.532 15:13:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:15:01.532 15:13:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:15:01.532 15:13:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.532 15:13:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.532 15:13:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:01.532 15:13:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:01.532 15:13:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.532 15:13:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.532 15:13:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.532 15:13:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.532 15:13:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.532 15:13:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.532 15:13:30 -- paths/export.sh@5 -- # export PATH 00:15:01.532 15:13:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.532 15:13:30 -- nvmf/common.sh@46 -- # : 0 00:15:01.532 15:13:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:01.532 15:13:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:01.532 15:13:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:01.532 15:13:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.532 15:13:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.532 15:13:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:01.532 15:13:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:01.532 15:13:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:01.532 15:13:30 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:15:01.532 15:13:30 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:15:01.532 15:13:30 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:15:01.532 15:13:30 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:01.532 15:13:30 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:15:01.532 15:13:30 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:15:01.532 15:13:30 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:15:01.532 15:13:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:01.532 15:13:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:01.532 15:13:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:01.532 15:13:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:01.532 15:13:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:01.532 15:13:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.532 15:13:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:01.532 15:13:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.532 15:13:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:01.532 15:13:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:01.532 15:13:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:01.532 15:13:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:01.532 15:13:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:01.532 15:13:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:01.532 15:13:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.532 15:13:30 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:01.532 15:13:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:01.532 15:13:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:01.532 15:13:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:01.532 15:13:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:01.532 15:13:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:01.532 15:13:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.532 15:13:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:01.532 15:13:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:01.532 15:13:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:01.532 15:13:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:01.532 15:13:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:01.532 15:13:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:01.532 Cannot find device "nvmf_tgt_br" 00:15:01.532 15:13:30 -- nvmf/common.sh@154 -- # true 00:15:01.532 15:13:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:01.532 Cannot find device "nvmf_tgt_br2" 00:15:01.532 15:13:30 -- nvmf/common.sh@155 -- # true 00:15:01.532 15:13:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:01.532 15:13:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:01.532 Cannot find device "nvmf_tgt_br" 00:15:01.532 15:13:30 -- nvmf/common.sh@157 -- # true 00:15:01.532 15:13:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:01.532 Cannot find device "nvmf_tgt_br2" 00:15:01.532 15:13:30 -- nvmf/common.sh@158 -- # true 00:15:01.532 15:13:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:01.532 15:13:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:01.532 15:13:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:01.532 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:01.532 15:13:30 -- nvmf/common.sh@161 -- # true 00:15:01.532 15:13:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:01.532 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:01.532 15:13:30 -- nvmf/common.sh@162 -- # true 00:15:01.532 15:13:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:01.532 15:13:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:01.532 15:13:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:01.532 15:13:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:01.532 15:13:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:01.532 15:13:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:01.532 15:13:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:01.532 15:13:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:01.532 15:13:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:01.532 15:13:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:01.532 15:13:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:01.791 15:13:30 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:01.791 15:13:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:01.791 15:13:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:01.791 15:13:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:01.791 15:13:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:01.791 15:13:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:01.791 15:13:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:01.791 15:13:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:01.791 15:13:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:01.791 15:13:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:01.791 15:13:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:01.791 15:13:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:01.791 15:13:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:01.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:01.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:15:01.791 00:15:01.791 --- 10.0.0.2 ping statistics --- 00:15:01.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.791 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:01.791 15:13:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:01.791 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:01.791 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:15:01.791 00:15:01.791 --- 10.0.0.3 ping statistics --- 00:15:01.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.791 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:01.791 15:13:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:01.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:01.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:01.791 00:15:01.791 --- 10.0.0.1 ping statistics --- 00:15:01.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.791 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:01.791 15:13:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:01.791 15:13:30 -- nvmf/common.sh@421 -- # return 0 00:15:01.791 15:13:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:01.791 15:13:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:01.792 15:13:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:01.792 15:13:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:01.792 15:13:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:01.792 15:13:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:01.792 15:13:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:01.792 15:13:30 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:15:01.792 15:13:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:01.792 15:13:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:01.792 15:13:30 -- common/autotest_common.sh@10 -- # set +x 00:15:01.792 15:13:30 -- nvmf/common.sh@469 -- # nvmfpid=71208 00:15:01.792 15:13:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:01.792 15:13:30 -- nvmf/common.sh@470 -- # waitforlisten 71208 00:15:01.792 15:13:30 -- common/autotest_common.sh@829 -- # '[' -z 71208 ']' 00:15:01.792 15:13:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.792 15:13:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:01.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.792 15:13:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.792 15:13:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:01.792 15:13:30 -- common/autotest_common.sh@10 -- # set +x 00:15:01.792 [2024-11-06 15:13:30.990513] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:01.792 [2024-11-06 15:13:30.990603] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.050 [2024-11-06 15:13:31.128942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.050 [2024-11-06 15:13:31.181804] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:02.050 [2024-11-06 15:13:31.181922] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.050 [2024-11-06 15:13:31.181933] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.050 [2024-11-06 15:13:31.181940] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
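The nvmf_veth_init calls above build a small virtual topology before the target is started. A condensed sketch of what they amount to, with namespace, interface, and address names taken from the trace (the second target interface, nvmf_tgt_if2 at 10.0.0.3, follows the same pattern and is omitted here):

    ip netns add nvmf_tgt_ns_spdk                                  # target runs inside this namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge the two halves together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                             # sanity check, as in the trace above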
00:15:02.050 [2024-11-06 15:13:31.181969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.984 15:13:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.984 15:13:31 -- common/autotest_common.sh@862 -- # return 0 00:15:02.984 15:13:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:02.984 15:13:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:02.984 15:13:31 -- common/autotest_common.sh@10 -- # set +x 00:15:02.984 15:13:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.984 15:13:31 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:15:02.984 15:13:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.984 15:13:32 -- common/autotest_common.sh@10 -- # set +x 00:15:02.984 [2024-11-06 15:13:32.016036] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.984 [2024-11-06 15:13:32.024108] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:02.984 null0 00:15:02.984 [2024-11-06 15:13:32.056056] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.984 15:13:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.984 15:13:32 -- host/discovery_remove_ifc.sh@59 -- # hostpid=71240 00:15:02.984 15:13:32 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:15:02.984 15:13:32 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 71240 /tmp/host.sock 00:15:02.984 15:13:32 -- common/autotest_common.sh@829 -- # '[' -z 71240 ']' 00:15:02.984 15:13:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:15:02.984 15:13:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.984 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:02.984 15:13:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:02.984 15:13:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.984 15:13:32 -- common/autotest_common.sh@10 -- # set +x 00:15:02.984 [2024-11-06 15:13:32.138984] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:02.984 [2024-11-06 15:13:32.139091] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71240 ] 00:15:03.242 [2024-11-06 15:13:32.279503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.242 [2024-11-06 15:13:32.350220] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:03.242 [2024-11-06 15:13:32.350399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.809 15:13:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:03.809 15:13:33 -- common/autotest_common.sh@862 -- # return 0 00:15:03.809 15:13:33 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:03.809 15:13:33 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:15:03.809 15:13:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.809 15:13:33 -- common/autotest_common.sh@10 -- # set +x 00:15:03.809 15:13:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.809 15:13:33 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:15:03.809 15:13:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.809 15:13:33 -- common/autotest_common.sh@10 -- # set +x 00:15:04.068 15:13:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.068 15:13:33 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:15:04.068 15:13:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.068 15:13:33 -- common/autotest_common.sh@10 -- # set +x 00:15:05.003 [2024-11-06 15:13:34.118954] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:05.003 [2024-11-06 15:13:34.119017] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:05.003 [2024-11-06 15:13:34.119036] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:05.003 [2024-11-06 15:13:34.124995] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:05.003 [2024-11-06 15:13:34.180594] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:05.003 [2024-11-06 15:13:34.180658] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:05.003 [2024-11-06 15:13:34.180711] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:05.003 [2024-11-06 15:13:34.180728] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:05.003 [2024-11-06 15:13:34.180752] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:05.003 15:13:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.003 15:13:34 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:15:05.003 15:13:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:05.003 15:13:34 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:05.003 [2024-11-06 15:13:34.187605] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1767be0 was disconnected and freed. delete nvme_qpair. 00:15:05.003 15:13:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.003 15:13:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:05.003 15:13:34 -- common/autotest_common.sh@10 -- # set +x 00:15:05.003 15:13:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:05.003 15:13:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:05.003 15:13:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.003 15:13:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:15:05.003 15:13:34 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:15:05.003 15:13:34 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:15:05.003 15:13:34 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:15:05.003 15:13:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:05.003 15:13:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:05.003 15:13:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:05.003 15:13:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:05.003 15:13:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.003 15:13:34 -- common/autotest_common.sh@10 -- # set +x 00:15:05.003 15:13:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:05.262 15:13:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.262 15:13:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:05.262 15:13:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:06.228 15:13:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:06.228 15:13:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:06.228 15:13:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:06.228 15:13:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:06.228 15:13:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.228 15:13:35 -- common/autotest_common.sh@10 -- # set +x 00:15:06.228 15:13:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:06.228 15:13:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.228 15:13:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:06.228 15:13:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:07.164 15:13:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:07.164 15:13:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:07.164 15:13:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:07.164 15:13:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:07.164 15:13:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.164 15:13:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:07.164 15:13:36 -- common/autotest_common.sh@10 -- # set +x 00:15:07.164 15:13:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.164 15:13:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:07.164 15:13:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:08.540 15:13:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:08.540 15:13:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
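The repeated rpc_cmd bdev_get_bdevs / jq / sort / xargs / sleep 1 records above come from a small polling helper. A minimal reconstruction, assuming the /tmp/host.sock socket and the names shown in the trace (the real script may differ in detail):

    get_bdev_list() {
        # Bdev names known to the host app, as one sorted, space-separated string.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once a second until the bdev list matches the expected value:
        # "nvme0n1" while the path is up, "" after the target interface is removed.
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }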
00:15:08.540 15:13:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:08.540 15:13:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.540 15:13:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:08.540 15:13:37 -- common/autotest_common.sh@10 -- # set +x 00:15:08.540 15:13:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:08.540 15:13:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.540 15:13:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:08.540 15:13:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:09.479 15:13:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:09.479 15:13:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:09.479 15:13:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:09.479 15:13:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:09.479 15:13:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.479 15:13:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:09.479 15:13:38 -- common/autotest_common.sh@10 -- # set +x 00:15:09.479 15:13:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.479 15:13:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:09.479 15:13:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:10.414 15:13:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:10.414 15:13:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:10.414 15:13:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:10.414 15:13:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.414 15:13:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:10.414 15:13:39 -- common/autotest_common.sh@10 -- # set +x 00:15:10.414 15:13:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:10.414 15:13:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.414 [2024-11-06 15:13:39.611560] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:15:10.414 [2024-11-06 15:13:39.611682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.414 [2024-11-06 15:13:39.611735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.414 [2024-11-06 15:13:39.611751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.414 [2024-11-06 15:13:39.611761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.414 [2024-11-06 15:13:39.611770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.414 [2024-11-06 15:13:39.611778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.414 [2024-11-06 15:13:39.611788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.414 [2024-11-06 15:13:39.611797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.414 [2024-11-06 
15:13:39.611806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.414 [2024-11-06 15:13:39.611814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.414 [2024-11-06 15:13:39.611823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dcde0 is same with the state(5) to be set 00:15:10.414 [2024-11-06 15:13:39.621540] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dcde0 (9): Bad file descriptor 00:15:10.414 15:13:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:10.414 15:13:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:10.414 [2024-11-06 15:13:39.631591] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:11.790 15:13:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:11.790 15:13:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:11.790 15:13:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:11.790 15:13:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:11.790 15:13:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:11.790 15:13:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.790 15:13:40 -- common/autotest_common.sh@10 -- # set +x 00:15:11.790 [2024-11-06 15:13:40.651762] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:15:12.725 [2024-11-06 15:13:41.675831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:15:13.661 [2024-11-06 15:13:42.699790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:15:13.661 [2024-11-06 15:13:42.699909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16dcde0 with addr=10.0.0.2, port=4420 00:15:13.661 [2024-11-06 15:13:42.699943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dcde0 is same with the state(5) to be set 00:15:13.661 [2024-11-06 15:13:42.699993] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:13.661 [2024-11-06 15:13:42.700015] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:13.661 [2024-11-06 15:13:42.700032] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:13.661 [2024-11-06 15:13:42.700051] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:15:13.661 [2024-11-06 15:13:42.700865] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dcde0 (9): Bad file descriptor 00:15:13.661 [2024-11-06 15:13:42.700943] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:15:13.661 [2024-11-06 15:13:42.700995] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:15:13.661 [2024-11-06 15:13:42.701064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.661 [2024-11-06 15:13:42.701095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.661 [2024-11-06 15:13:42.701129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.661 [2024-11-06 15:13:42.701151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.661 [2024-11-06 15:13:42.701172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.661 [2024-11-06 15:13:42.701192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.662 [2024-11-06 15:13:42.701214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.662 [2024-11-06 15:13:42.701234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.662 [2024-11-06 15:13:42.701255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.662 [2024-11-06 15:13:42.701276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.662 [2024-11-06 15:13:42.701295] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
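The errno 110 and "Resetting controller failed" records above are the expected fallout of the fault this test injects: the target's data interface is torn down underneath an attached controller, so every reconnect to 10.0.0.2:4420 times out and the discovery entry is dropped. A sketch of that injection step, using the namespace and interface names from the trace:

    # Pull the target address out from under the connected host, then down the link.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    # From here on the host's reconnect attempts fail with ETIMEDOUT (110)
    # until the interface is restored.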
00:15:13.662 [2024-11-06 15:13:42.701356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dd1f0 (9): Bad file descriptor 00:15:13.662 [2024-11-06 15:13:42.702354] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:15:13.662 [2024-11-06 15:13:42.702402] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:15:13.662 15:13:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.662 15:13:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:13.662 15:13:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:14.597 15:13:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:14.597 15:13:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:14.597 15:13:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:14.597 15:13:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:14.597 15:13:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:14.597 15:13:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.597 15:13:43 -- common/autotest_common.sh@10 -- # set +x 00:15:14.597 15:13:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.597 15:13:43 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:15:14.597 15:13:43 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:14.597 15:13:43 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:14.597 15:13:43 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:15:14.597 15:13:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:14.597 15:13:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:14.598 15:13:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:14.598 15:13:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:14.598 15:13:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:14.598 15:13:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.598 15:13:43 -- common/autotest_common.sh@10 -- # set +x 00:15:14.598 15:13:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.598 15:13:43 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:14.598 15:13:43 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:15.533 [2024-11-06 15:13:44.714120] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:15.533 [2024-11-06 15:13:44.714157] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:15.533 [2024-11-06 15:13:44.714191] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:15.533 [2024-11-06 15:13:44.720154] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:15:15.533 [2024-11-06 15:13:44.774959] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:15.533 [2024-11-06 15:13:44.775019] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:15.533 [2024-11-06 15:13:44.775040] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:15.533 [2024-11-06 15:13:44.775054] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:15:15.533 [2024-11-06 15:13:44.775063] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:15.533 [2024-11-06 15:13:44.782765] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x171ece0 was disconnected and freed. delete nvme_qpair. 00:15:15.792 15:13:44 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:15.792 15:13:44 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:15.792 15:13:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.792 15:13:44 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:15.792 15:13:44 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:15.792 15:13:44 -- common/autotest_common.sh@10 -- # set +x 00:15:15.792 15:13:44 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:15.792 15:13:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.792 15:13:44 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:15:15.792 15:13:44 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:15:15.792 15:13:44 -- host/discovery_remove_ifc.sh@90 -- # killprocess 71240 00:15:15.792 15:13:44 -- common/autotest_common.sh@936 -- # '[' -z 71240 ']' 00:15:15.792 15:13:44 -- common/autotest_common.sh@940 -- # kill -0 71240 00:15:15.792 15:13:44 -- common/autotest_common.sh@941 -- # uname 00:15:15.792 15:13:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:15.792 15:13:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71240 00:15:15.792 killing process with pid 71240 00:15:15.792 15:13:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:15.792 15:13:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:15.792 15:13:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71240' 00:15:15.792 15:13:44 -- common/autotest_common.sh@955 -- # kill 71240 00:15:15.792 15:13:44 -- common/autotest_common.sh@960 -- # wait 71240 00:15:16.051 15:13:45 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:15:16.051 15:13:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:16.051 15:13:45 -- nvmf/common.sh@116 -- # sync 00:15:16.051 15:13:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:16.051 15:13:45 -- nvmf/common.sh@119 -- # set +e 00:15:16.051 15:13:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:16.051 15:13:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:16.051 rmmod nvme_tcp 00:15:16.051 rmmod nvme_fabrics 00:15:16.051 rmmod nvme_keyring 00:15:16.051 15:13:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:16.051 15:13:45 -- nvmf/common.sh@123 -- # set -e 00:15:16.051 15:13:45 -- nvmf/common.sh@124 -- # return 0 00:15:16.051 15:13:45 -- nvmf/common.sh@477 -- # '[' -n 71208 ']' 00:15:16.051 15:13:45 -- nvmf/common.sh@478 -- # killprocess 71208 00:15:16.051 15:13:45 -- common/autotest_common.sh@936 -- # '[' -z 71208 ']' 00:15:16.051 15:13:45 -- common/autotest_common.sh@940 -- # kill -0 71208 00:15:16.051 15:13:45 -- common/autotest_common.sh@941 -- # uname 00:15:16.051 15:13:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:16.051 15:13:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71208 00:15:16.051 killing process with pid 71208 00:15:16.051 15:13:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:16.051 15:13:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
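The recovery half is the mirror image of the removal: restore the address, bring the link back up, and wait for discovery to re-attach under a new controller name. A condensed sketch, with names taken from the trace above (wait_for_bdev as reconstructed earlier):

    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1   # discovery re-attaches as nvme1, so the namespace reappears as nvme1n1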
00:15:16.051 15:13:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71208' 00:15:16.051 15:13:45 -- common/autotest_common.sh@955 -- # kill 71208 00:15:16.051 15:13:45 -- common/autotest_common.sh@960 -- # wait 71208 00:15:16.309 15:13:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:16.309 15:13:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:16.309 15:13:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:16.309 15:13:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.309 15:13:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:16.309 15:13:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.309 15:13:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.309 15:13:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.309 15:13:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:16.309 00:15:16.309 real 0m15.089s 00:15:16.309 user 0m24.255s 00:15:16.309 sys 0m2.421s 00:15:16.309 15:13:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:16.309 ************************************ 00:15:16.309 END TEST nvmf_discovery_remove_ifc 00:15:16.309 ************************************ 00:15:16.309 15:13:45 -- common/autotest_common.sh@10 -- # set +x 00:15:16.309 15:13:45 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:15:16.309 15:13:45 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:15:16.309 15:13:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:16.309 15:13:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:16.309 15:13:45 -- common/autotest_common.sh@10 -- # set +x 00:15:16.309 ************************************ 00:15:16.309 START TEST nvmf_digest 00:15:16.309 ************************************ 00:15:16.309 15:13:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:15:16.568 * Looking for test storage... 00:15:16.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:16.568 15:13:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:16.568 15:13:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:16.568 15:13:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:16.568 15:13:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:16.568 15:13:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:16.568 15:13:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:16.568 15:13:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:16.568 15:13:45 -- scripts/common.sh@335 -- # IFS=.-: 00:15:16.568 15:13:45 -- scripts/common.sh@335 -- # read -ra ver1 00:15:16.568 15:13:45 -- scripts/common.sh@336 -- # IFS=.-: 00:15:16.568 15:13:45 -- scripts/common.sh@336 -- # read -ra ver2 00:15:16.568 15:13:45 -- scripts/common.sh@337 -- # local 'op=<' 00:15:16.568 15:13:45 -- scripts/common.sh@339 -- # ver1_l=2 00:15:16.568 15:13:45 -- scripts/common.sh@340 -- # ver2_l=1 00:15:16.568 15:13:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:16.568 15:13:45 -- scripts/common.sh@343 -- # case "$op" in 00:15:16.568 15:13:45 -- scripts/common.sh@344 -- # : 1 00:15:16.568 15:13:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:16.568 15:13:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:16.568 15:13:45 -- scripts/common.sh@364 -- # decimal 1 00:15:16.568 15:13:45 -- scripts/common.sh@352 -- # local d=1 00:15:16.568 15:13:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:16.568 15:13:45 -- scripts/common.sh@354 -- # echo 1 00:15:16.568 15:13:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:16.568 15:13:45 -- scripts/common.sh@365 -- # decimal 2 00:15:16.568 15:13:45 -- scripts/common.sh@352 -- # local d=2 00:15:16.568 15:13:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:16.568 15:13:45 -- scripts/common.sh@354 -- # echo 2 00:15:16.568 15:13:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:16.568 15:13:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:16.568 15:13:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:16.568 15:13:45 -- scripts/common.sh@367 -- # return 0 00:15:16.568 15:13:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:16.568 15:13:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:16.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.568 --rc genhtml_branch_coverage=1 00:15:16.568 --rc genhtml_function_coverage=1 00:15:16.568 --rc genhtml_legend=1 00:15:16.568 --rc geninfo_all_blocks=1 00:15:16.568 --rc geninfo_unexecuted_blocks=1 00:15:16.568 00:15:16.568 ' 00:15:16.568 15:13:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:16.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.568 --rc genhtml_branch_coverage=1 00:15:16.568 --rc genhtml_function_coverage=1 00:15:16.568 --rc genhtml_legend=1 00:15:16.568 --rc geninfo_all_blocks=1 00:15:16.568 --rc geninfo_unexecuted_blocks=1 00:15:16.568 00:15:16.568 ' 00:15:16.568 15:13:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:16.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.568 --rc genhtml_branch_coverage=1 00:15:16.568 --rc genhtml_function_coverage=1 00:15:16.568 --rc genhtml_legend=1 00:15:16.568 --rc geninfo_all_blocks=1 00:15:16.568 --rc geninfo_unexecuted_blocks=1 00:15:16.568 00:15:16.568 ' 00:15:16.568 15:13:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:16.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.568 --rc genhtml_branch_coverage=1 00:15:16.568 --rc genhtml_function_coverage=1 00:15:16.569 --rc genhtml_legend=1 00:15:16.569 --rc geninfo_all_blocks=1 00:15:16.569 --rc geninfo_unexecuted_blocks=1 00:15:16.569 00:15:16.569 ' 00:15:16.569 15:13:45 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:16.569 15:13:45 -- nvmf/common.sh@7 -- # uname -s 00:15:16.569 15:13:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.569 15:13:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.569 15:13:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.569 15:13:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.569 15:13:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.569 15:13:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.569 15:13:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.569 15:13:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.569 15:13:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.569 15:13:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.569 15:13:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:15:16.569 
15:13:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:15:16.569 15:13:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.569 15:13:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.569 15:13:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:16.569 15:13:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:16.569 15:13:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.569 15:13:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.569 15:13:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.569 15:13:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.569 15:13:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.569 15:13:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.569 15:13:45 -- paths/export.sh@5 -- # export PATH 00:15:16.569 15:13:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.569 15:13:45 -- nvmf/common.sh@46 -- # : 0 00:15:16.569 15:13:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:16.569 15:13:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:16.569 15:13:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:16.569 15:13:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.569 15:13:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.569 15:13:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:16.569 15:13:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:16.569 15:13:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:16.569 15:13:45 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:16.569 15:13:45 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:15:16.569 15:13:45 -- host/digest.sh@16 -- # runtime=2 00:15:16.569 15:13:45 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:15:16.569 15:13:45 -- host/digest.sh@132 -- # nvmftestinit 00:15:16.569 15:13:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:16.569 15:13:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.569 15:13:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:16.569 15:13:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:16.569 15:13:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:16.569 15:13:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.569 15:13:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.569 15:13:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.569 15:13:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:16.569 15:13:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:16.569 15:13:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:16.569 15:13:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:16.569 15:13:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:16.569 15:13:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:16.569 15:13:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.569 15:13:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:16.569 15:13:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:16.569 15:13:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:16.569 15:13:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:16.569 15:13:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:16.569 15:13:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:16.569 15:13:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.569 15:13:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:16.569 15:13:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:16.569 15:13:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:16.569 15:13:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:16.569 15:13:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:16.569 15:13:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:16.569 Cannot find device "nvmf_tgt_br" 00:15:16.569 15:13:45 -- nvmf/common.sh@154 -- # true 00:15:16.569 15:13:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:16.569 Cannot find device "nvmf_tgt_br2" 00:15:16.569 15:13:45 -- nvmf/common.sh@155 -- # true 00:15:16.569 15:13:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:16.569 15:13:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:16.569 Cannot find device "nvmf_tgt_br" 00:15:16.569 15:13:45 -- nvmf/common.sh@157 -- # true 00:15:16.569 15:13:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:16.569 Cannot find device "nvmf_tgt_br2" 00:15:16.569 15:13:45 -- nvmf/common.sh@158 -- # true 00:15:16.569 15:13:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:16.828 15:13:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:16.828 
15:13:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.828 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.828 15:13:45 -- nvmf/common.sh@161 -- # true 00:15:16.828 15:13:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.828 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.828 15:13:45 -- nvmf/common.sh@162 -- # true 00:15:16.828 15:13:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:16.828 15:13:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:16.828 15:13:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:16.828 15:13:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:16.828 15:13:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:16.828 15:13:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:16.828 15:13:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:16.828 15:13:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:16.828 15:13:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:16.828 15:13:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:16.828 15:13:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:16.828 15:13:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:16.828 15:13:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:16.828 15:13:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:16.828 15:13:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:16.828 15:13:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:16.828 15:13:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:16.828 15:13:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:16.828 15:13:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:16.828 15:13:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:16.828 15:13:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:16.828 15:13:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:16.828 15:13:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:16.828 15:13:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:16.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:16.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:15:16.828 00:15:16.828 --- 10.0.0.2 ping statistics --- 00:15:16.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.828 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:16.828 15:13:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:16.828 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:16.828 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:15:16.828 00:15:16.828 --- 10.0.0.3 ping statistics --- 00:15:16.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.828 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:16.828 15:13:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:16.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:16.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:16.828 00:15:16.828 --- 10.0.0.1 ping statistics --- 00:15:16.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.828 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:16.828 15:13:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:16.828 15:13:46 -- nvmf/common.sh@421 -- # return 0 00:15:16.828 15:13:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:16.828 15:13:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:16.828 15:13:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:16.828 15:13:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:16.828 15:13:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:16.828 15:13:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:16.828 15:13:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:16.828 15:13:46 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:16.828 15:13:46 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:15:16.828 15:13:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:16.828 15:13:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:16.828 15:13:46 -- common/autotest_common.sh@10 -- # set +x 00:15:16.828 ************************************ 00:15:16.828 START TEST nvmf_digest_clean 00:15:16.828 ************************************ 00:15:16.828 15:13:46 -- common/autotest_common.sh@1114 -- # run_digest 00:15:16.828 15:13:46 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:15:16.828 15:13:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:16.828 15:13:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:16.828 15:13:46 -- common/autotest_common.sh@10 -- # set +x 00:15:16.828 15:13:46 -- nvmf/common.sh@469 -- # nvmfpid=71659 00:15:16.828 15:13:46 -- nvmf/common.sh@470 -- # waitforlisten 71659 00:15:16.828 15:13:46 -- common/autotest_common.sh@829 -- # '[' -z 71659 ']' 00:15:16.828 15:13:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.828 15:13:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:16.828 15:13:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:16.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.828 15:13:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.828 15:13:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:16.828 15:13:46 -- common/autotest_common.sh@10 -- # set +x 00:15:17.087 [2024-11-06 15:13:46.158949] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
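For readers skimming the trace, the nvmf_veth_init sequence above (stale-interface cleanup, veth pairs, namespace, bridge, iptables, pings) builds the following topology. This is a condensed sketch with interface names and addresses copied from the log, not a verbatim excerpt:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target ends move into the namespace
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first listener address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second listener address
  # (each interface is also brought up with `ip link set ... up`, as in the trace)
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br       # the three host-side peers are bridged together
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # the three pings above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the
  # namespace) confirm the bridge forwards in both directions before the target starts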
00:15:17.087 [2024-11-06 15:13:46.159077] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.087 [2024-11-06 15:13:46.297487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.087 [2024-11-06 15:13:46.349517] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:17.087 [2024-11-06 15:13:46.349946] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.087 [2024-11-06 15:13:46.350045] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.087 [2024-11-06 15:13:46.350164] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.087 [2024-11-06 15:13:46.350258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.346 15:13:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.346 15:13:46 -- common/autotest_common.sh@862 -- # return 0 00:15:17.346 15:13:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:17.346 15:13:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:17.346 15:13:46 -- common/autotest_common.sh@10 -- # set +x 00:15:17.346 15:13:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.346 15:13:46 -- host/digest.sh@120 -- # common_target_config 00:15:17.346 15:13:46 -- host/digest.sh@43 -- # rpc_cmd 00:15:17.346 15:13:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.346 15:13:46 -- common/autotest_common.sh@10 -- # set +x 00:15:17.346 null0 00:15:17.346 [2024-11-06 15:13:46.511623] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:17.346 [2024-11-06 15:13:46.535775] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.346 15:13:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.346 15:13:46 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:15:17.346 15:13:46 -- host/digest.sh@77 -- # local rw bs qd 00:15:17.346 15:13:46 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:15:17.346 15:13:46 -- host/digest.sh@80 -- # rw=randread 00:15:17.346 15:13:46 -- host/digest.sh@80 -- # bs=4096 00:15:17.346 15:13:46 -- host/digest.sh@80 -- # qd=128 00:15:17.346 15:13:46 -- host/digest.sh@82 -- # bperfpid=71689 00:15:17.346 15:13:46 -- host/digest.sh@83 -- # waitforlisten 71689 /var/tmp/bperf.sock 00:15:17.346 15:13:46 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:15:17.346 15:13:46 -- common/autotest_common.sh@829 -- # '[' -z 71689 ']' 00:15:17.346 15:13:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:17.346 15:13:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:17.346 15:13:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:17.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
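Condensed, the run_bperf iteration that starts here follows this sequence (commands copied from the surrounding trace; the rpc.py, bdevperf and bdevperf.py paths are shortened, and the shell glue is a sketch rather than the exact digest.sh code):

  bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  rpc.py -s /var/tmp/bperf.sock framework_start_init
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0      # --ddgst enables NVMe/TCP data digest
  bdevperf.py -s /var/tmp/bperf.sock perform_tests        # drives the 2-second workload
  # afterwards accel_get_stats on the same socket is queried to confirm crc32c work was executed

The three later runs repeat the same pattern with randread 131072/qd16, randwrite 4096/qd128 and randwrite 131072/qd16.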
00:15:17.346 15:13:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:17.346 15:13:46 -- common/autotest_common.sh@10 -- # set +x 00:15:17.346 [2024-11-06 15:13:46.594158] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:17.346 [2024-11-06 15:13:46.594260] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71689 ] 00:15:17.605 [2024-11-06 15:13:46.730638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.605 [2024-11-06 15:13:46.798635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.541 15:13:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:18.541 15:13:47 -- common/autotest_common.sh@862 -- # return 0 00:15:18.541 15:13:47 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:15:18.541 15:13:47 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:15:18.541 15:13:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:15:18.541 15:13:47 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:18.541 15:13:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:19.108 nvme0n1 00:15:19.108 15:13:48 -- host/digest.sh@91 -- # bperf_py perform_tests 00:15:19.108 15:13:48 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:19.108 Running I/O for 2 seconds... 
00:15:21.047 00:15:21.047 Latency(us) 00:15:21.047 [2024-11-06T15:13:50.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.047 [2024-11-06T15:13:50.322Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:15:21.047 nvme0n1 : 2.01 16251.03 63.48 0.00 0.00 7870.58 6970.65 23354.65 00:15:21.047 [2024-11-06T15:13:50.322Z] =================================================================================================================== 00:15:21.047 [2024-11-06T15:13:50.322Z] Total : 16251.03 63.48 0.00 0.00 7870.58 6970.65 23354.65 00:15:21.047 0 00:15:21.047 15:13:50 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:15:21.047 15:13:50 -- host/digest.sh@92 -- # get_accel_stats 00:15:21.047 15:13:50 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:15:21.047 15:13:50 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:15:21.047 | select(.opcode=="crc32c") 00:15:21.047 | "\(.module_name) \(.executed)"' 00:15:21.047 15:13:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:15:21.305 15:13:50 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:15:21.305 15:13:50 -- host/digest.sh@93 -- # exp_module=software 00:15:21.305 15:13:50 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:15:21.305 15:13:50 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:21.305 15:13:50 -- host/digest.sh@97 -- # killprocess 71689 00:15:21.305 15:13:50 -- common/autotest_common.sh@936 -- # '[' -z 71689 ']' 00:15:21.305 15:13:50 -- common/autotest_common.sh@940 -- # kill -0 71689 00:15:21.305 15:13:50 -- common/autotest_common.sh@941 -- # uname 00:15:21.305 15:13:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:21.305 15:13:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71689 00:15:21.564 15:13:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:21.564 killing process with pid 71689 00:15:21.564 15:13:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:21.564 15:13:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71689' 00:15:21.564 Received shutdown signal, test time was about 2.000000 seconds 00:15:21.564 00:15:21.564 Latency(us) 00:15:21.564 [2024-11-06T15:13:50.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.564 [2024-11-06T15:13:50.839Z] =================================================================================================================== 00:15:21.564 [2024-11-06T15:13:50.839Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:21.564 15:13:50 -- common/autotest_common.sh@955 -- # kill 71689 00:15:21.564 15:13:50 -- common/autotest_common.sh@960 -- # wait 71689 00:15:21.564 15:13:50 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:15:21.564 15:13:50 -- host/digest.sh@77 -- # local rw bs qd 00:15:21.564 15:13:50 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:15:21.564 15:13:50 -- host/digest.sh@80 -- # rw=randread 00:15:21.564 15:13:50 -- host/digest.sh@80 -- # bs=131072 00:15:21.564 15:13:50 -- host/digest.sh@80 -- # qd=16 00:15:21.564 15:13:50 -- host/digest.sh@82 -- # bperfpid=71748 00:15:21.564 15:13:50 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:15:21.564 15:13:50 -- host/digest.sh@83 -- # waitforlisten 71748 /var/tmp/bperf.sock 00:15:21.564 15:13:50 -- 
common/autotest_common.sh@829 -- # '[' -z 71748 ']' 00:15:21.564 15:13:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:21.564 15:13:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:21.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:21.564 15:13:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:21.564 15:13:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:21.564 15:13:50 -- common/autotest_common.sh@10 -- # set +x 00:15:21.564 [2024-11-06 15:13:50.818616] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:21.564 [2024-11-06 15:13:50.818742] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71748 ] 00:15:21.564 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:21.564 Zero copy mechanism will not be used. 00:15:21.823 [2024-11-06 15:13:50.950021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.823 [2024-11-06 15:13:51.001506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.823 15:13:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:21.823 15:13:51 -- common/autotest_common.sh@862 -- # return 0 00:15:21.823 15:13:51 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:15:21.823 15:13:51 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:15:21.823 15:13:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:15:22.390 15:13:51 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:22.390 15:13:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:22.390 nvme0n1 00:15:22.390 15:13:51 -- host/digest.sh@91 -- # bperf_py perform_tests 00:15:22.390 15:13:51 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:22.649 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:22.649 Zero copy mechanism will not be used. 00:15:22.649 Running I/O for 2 seconds... 
00:15:24.552 00:15:24.552 Latency(us) 00:15:24.552 [2024-11-06T15:13:53.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.552 [2024-11-06T15:13:53.827Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:15:24.552 nvme0n1 : 2.00 8373.92 1046.74 0.00 0.00 1907.88 1653.29 4349.21 00:15:24.552 [2024-11-06T15:13:53.827Z] =================================================================================================================== 00:15:24.552 [2024-11-06T15:13:53.827Z] Total : 8373.92 1046.74 0.00 0.00 1907.88 1653.29 4349.21 00:15:24.552 0 00:15:24.552 15:13:53 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:15:24.552 15:13:53 -- host/digest.sh@92 -- # get_accel_stats 00:15:24.552 15:13:53 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:15:24.552 15:13:53 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:15:24.552 | select(.opcode=="crc32c") 00:15:24.552 | "\(.module_name) \(.executed)"' 00:15:24.552 15:13:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:15:24.811 15:13:54 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:15:24.811 15:13:54 -- host/digest.sh@93 -- # exp_module=software 00:15:24.811 15:13:54 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:15:24.811 15:13:54 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:24.811 15:13:54 -- host/digest.sh@97 -- # killprocess 71748 00:15:24.811 15:13:54 -- common/autotest_common.sh@936 -- # '[' -z 71748 ']' 00:15:24.811 15:13:54 -- common/autotest_common.sh@940 -- # kill -0 71748 00:15:24.811 15:13:54 -- common/autotest_common.sh@941 -- # uname 00:15:24.811 15:13:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:24.811 15:13:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71748 00:15:25.069 15:13:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:25.069 killing process with pid 71748 00:15:25.069 15:13:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:25.069 15:13:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71748' 00:15:25.069 Received shutdown signal, test time was about 2.000000 seconds 00:15:25.069 00:15:25.069 Latency(us) 00:15:25.069 [2024-11-06T15:13:54.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.069 [2024-11-06T15:13:54.344Z] =================================================================================================================== 00:15:25.069 [2024-11-06T15:13:54.344Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:25.069 15:13:54 -- common/autotest_common.sh@955 -- # kill 71748 00:15:25.069 15:13:54 -- common/autotest_common.sh@960 -- # wait 71748 00:15:25.069 15:13:54 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:15:25.070 15:13:54 -- host/digest.sh@77 -- # local rw bs qd 00:15:25.070 15:13:54 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:15:25.070 15:13:54 -- host/digest.sh@80 -- # rw=randwrite 00:15:25.070 15:13:54 -- host/digest.sh@80 -- # bs=4096 00:15:25.070 15:13:54 -- host/digest.sh@80 -- # qd=128 00:15:25.070 15:13:54 -- host/digest.sh@82 -- # bperfpid=71795 00:15:25.070 15:13:54 -- host/digest.sh@83 -- # waitforlisten 71795 /var/tmp/bperf.sock 00:15:25.070 15:13:54 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:15:25.070 15:13:54 -- 
common/autotest_common.sh@829 -- # '[' -z 71795 ']' 00:15:25.070 15:13:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:25.070 15:13:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:25.070 15:13:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:25.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:25.070 15:13:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:25.070 15:13:54 -- common/autotest_common.sh@10 -- # set +x 00:15:25.070 [2024-11-06 15:13:54.337109] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:25.070 [2024-11-06 15:13:54.337774] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71795 ] 00:15:25.328 [2024-11-06 15:13:54.470283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.328 [2024-11-06 15:13:54.522078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.263 15:13:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:26.263 15:13:55 -- common/autotest_common.sh@862 -- # return 0 00:15:26.263 15:13:55 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:15:26.263 15:13:55 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:15:26.263 15:13:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:15:26.521 15:13:55 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:26.521 15:13:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:26.779 nvme0n1 00:15:26.779 15:13:55 -- host/digest.sh@91 -- # bperf_py perform_tests 00:15:26.779 15:13:55 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:26.779 Running I/O for 2 seconds... 
00:15:29.308 00:15:29.308 Latency(us) 00:15:29.308 [2024-11-06T15:13:58.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.308 [2024-11-06T15:13:58.583Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:29.308 nvme0n1 : 2.01 17393.62 67.94 0.00 0.00 7352.48 6672.76 15490.33 00:15:29.308 [2024-11-06T15:13:58.583Z] =================================================================================================================== 00:15:29.308 [2024-11-06T15:13:58.583Z] Total : 17393.62 67.94 0.00 0.00 7352.48 6672.76 15490.33 00:15:29.308 0 00:15:29.308 15:13:58 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:15:29.308 15:13:58 -- host/digest.sh@92 -- # get_accel_stats 00:15:29.308 15:13:58 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:15:29.308 15:13:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:15:29.308 15:13:58 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:15:29.308 | select(.opcode=="crc32c") 00:15:29.308 | "\(.module_name) \(.executed)"' 00:15:29.308 15:13:58 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:15:29.308 15:13:58 -- host/digest.sh@93 -- # exp_module=software 00:15:29.308 15:13:58 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:15:29.308 15:13:58 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:29.308 15:13:58 -- host/digest.sh@97 -- # killprocess 71795 00:15:29.308 15:13:58 -- common/autotest_common.sh@936 -- # '[' -z 71795 ']' 00:15:29.308 15:13:58 -- common/autotest_common.sh@940 -- # kill -0 71795 00:15:29.308 15:13:58 -- common/autotest_common.sh@941 -- # uname 00:15:29.308 15:13:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:29.308 15:13:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71795 00:15:29.308 15:13:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:29.308 killing process with pid 71795 00:15:29.308 15:13:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:29.308 15:13:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71795' 00:15:29.308 Received shutdown signal, test time was about 2.000000 seconds 00:15:29.308 00:15:29.308 Latency(us) 00:15:29.308 [2024-11-06T15:13:58.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.308 [2024-11-06T15:13:58.583Z] =================================================================================================================== 00:15:29.308 [2024-11-06T15:13:58.583Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:29.308 15:13:58 -- common/autotest_common.sh@955 -- # kill 71795 00:15:29.308 15:13:58 -- common/autotest_common.sh@960 -- # wait 71795 00:15:29.308 15:13:58 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:15:29.308 15:13:58 -- host/digest.sh@77 -- # local rw bs qd 00:15:29.308 15:13:58 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:15:29.308 15:13:58 -- host/digest.sh@80 -- # rw=randwrite 00:15:29.308 15:13:58 -- host/digest.sh@80 -- # bs=131072 00:15:29.308 15:13:58 -- host/digest.sh@80 -- # qd=16 00:15:29.308 15:13:58 -- host/digest.sh@82 -- # bperfpid=71852 00:15:29.308 15:13:58 -- host/digest.sh@83 -- # waitforlisten 71852 /var/tmp/bperf.sock 00:15:29.308 15:13:58 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:15:29.308 15:13:58 -- 
common/autotest_common.sh@829 -- # '[' -z 71852 ']' 00:15:29.308 15:13:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:29.308 15:13:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:29.308 15:13:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:29.308 15:13:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.308 15:13:58 -- common/autotest_common.sh@10 -- # set +x 00:15:29.566 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:29.566 Zero copy mechanism will not be used. 00:15:29.566 [2024-11-06 15:13:58.623717] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:29.566 [2024-11-06 15:13:58.623833] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71852 ] 00:15:29.566 [2024-11-06 15:13:58.760460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.566 [2024-11-06 15:13:58.815142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.824 15:13:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:29.824 15:13:58 -- common/autotest_common.sh@862 -- # return 0 00:15:29.824 15:13:58 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:15:29.824 15:13:58 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:15:29.824 15:13:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:15:30.082 15:13:59 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:30.082 15:13:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:30.341 nvme0n1 00:15:30.341 15:13:59 -- host/digest.sh@91 -- # bperf_py perform_tests 00:15:30.341 15:13:59 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:30.341 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:30.341 Zero copy mechanism will not be used. 00:15:30.341 Running I/O for 2 seconds... 
00:15:32.871 00:15:32.871 Latency(us) 00:15:32.871 [2024-11-06T15:14:02.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.871 [2024-11-06T15:14:02.146Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:15:32.871 nvme0n1 : 2.00 6955.03 869.38 0.00 0.00 2295.59 1623.51 5481.19 00:15:32.871 [2024-11-06T15:14:02.146Z] =================================================================================================================== 00:15:32.871 [2024-11-06T15:14:02.146Z] Total : 6955.03 869.38 0.00 0.00 2295.59 1623.51 5481.19 00:15:32.871 0 00:15:32.871 15:14:01 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:15:32.871 15:14:01 -- host/digest.sh@92 -- # get_accel_stats 00:15:32.871 15:14:01 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:15:32.871 15:14:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:15:32.871 15:14:01 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:15:32.871 | select(.opcode=="crc32c") 00:15:32.871 | "\(.module_name) \(.executed)"' 00:15:32.871 15:14:01 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:15:32.871 15:14:01 -- host/digest.sh@93 -- # exp_module=software 00:15:32.871 15:14:01 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:15:32.871 15:14:01 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:32.871 15:14:01 -- host/digest.sh@97 -- # killprocess 71852 00:15:32.871 15:14:01 -- common/autotest_common.sh@936 -- # '[' -z 71852 ']' 00:15:32.871 15:14:01 -- common/autotest_common.sh@940 -- # kill -0 71852 00:15:32.871 15:14:01 -- common/autotest_common.sh@941 -- # uname 00:15:32.871 15:14:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:32.871 15:14:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71852 00:15:32.871 killing process with pid 71852 00:15:32.871 Received shutdown signal, test time was about 2.000000 seconds 00:15:32.871 00:15:32.871 Latency(us) 00:15:32.871 [2024-11-06T15:14:02.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.871 [2024-11-06T15:14:02.146Z] =================================================================================================================== 00:15:32.871 [2024-11-06T15:14:02.146Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:32.871 15:14:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:32.871 15:14:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:32.871 15:14:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71852' 00:15:32.871 15:14:01 -- common/autotest_common.sh@955 -- # kill 71852 00:15:32.871 15:14:01 -- common/autotest_common.sh@960 -- # wait 71852 00:15:32.871 15:14:02 -- host/digest.sh@126 -- # killprocess 71659 00:15:32.871 15:14:02 -- common/autotest_common.sh@936 -- # '[' -z 71659 ']' 00:15:32.871 15:14:02 -- common/autotest_common.sh@940 -- # kill -0 71659 00:15:32.871 15:14:02 -- common/autotest_common.sh@941 -- # uname 00:15:32.871 15:14:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:32.871 15:14:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71659 00:15:32.871 killing process with pid 71659 00:15:32.871 15:14:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:32.871 15:14:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:32.871 15:14:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71659' 00:15:32.871 
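The crc32c accounting check that closes each of the four runs above boils down to the following; the jq filter is verbatim from the trace, while the surrounding shell is a condensed sketch (the expected module is "software" here because no accel hardware is configured in this job):

  rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[]  | select(.opcode=="crc32c")  | "\(.module_name) \(.executed)"' \
    | { read -r acc_module acc_executed;
        [[ $acc_module == software ]] && (( acc_executed > 0 )); }   # digests were computed, in software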
15:14:02 -- common/autotest_common.sh@955 -- # kill 71659 00:15:32.871 15:14:02 -- common/autotest_common.sh@960 -- # wait 71659 00:15:33.130 ************************************ 00:15:33.130 END TEST nvmf_digest_clean 00:15:33.130 ************************************ 00:15:33.130 00:15:33.130 real 0m16.185s 00:15:33.130 user 0m31.796s 00:15:33.130 sys 0m4.373s 00:15:33.130 15:14:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:33.130 15:14:02 -- common/autotest_common.sh@10 -- # set +x 00:15:33.130 15:14:02 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:15:33.130 15:14:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:33.130 15:14:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:33.130 15:14:02 -- common/autotest_common.sh@10 -- # set +x 00:15:33.130 ************************************ 00:15:33.130 START TEST nvmf_digest_error 00:15:33.130 ************************************ 00:15:33.130 15:14:02 -- common/autotest_common.sh@1114 -- # run_digest_error 00:15:33.130 15:14:02 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:15:33.130 15:14:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:33.130 15:14:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:33.130 15:14:02 -- common/autotest_common.sh@10 -- # set +x 00:15:33.130 15:14:02 -- nvmf/common.sh@469 -- # nvmfpid=71932 00:15:33.130 15:14:02 -- nvmf/common.sh@470 -- # waitforlisten 71932 00:15:33.130 15:14:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:33.130 15:14:02 -- common/autotest_common.sh@829 -- # '[' -z 71932 ']' 00:15:33.130 15:14:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.130 15:14:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:33.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.130 15:14:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.130 15:14:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:33.130 15:14:02 -- common/autotest_common.sh@10 -- # set +x 00:15:33.130 [2024-11-06 15:14:02.395121] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:33.130 [2024-11-06 15:14:02.395212] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.388 [2024-11-06 15:14:02.524530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.388 [2024-11-06 15:14:02.574224] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:33.388 [2024-11-06 15:14:02.574390] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.388 [2024-11-06 15:14:02.574402] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.388 [2024-11-06 15:14:02.574410] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:33.388 [2024-11-06 15:14:02.574438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.352 15:14:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:34.352 15:14:03 -- common/autotest_common.sh@862 -- # return 0 00:15:34.352 15:14:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:34.352 15:14:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:34.352 15:14:03 -- common/autotest_common.sh@10 -- # set +x 00:15:34.352 15:14:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.352 15:14:03 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:15:34.352 15:14:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.352 15:14:03 -- common/autotest_common.sh@10 -- # set +x 00:15:34.352 [2024-11-06 15:14:03.386959] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:15:34.352 15:14:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.352 15:14:03 -- host/digest.sh@104 -- # common_target_config 00:15:34.352 15:14:03 -- host/digest.sh@43 -- # rpc_cmd 00:15:34.352 15:14:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.352 15:14:03 -- common/autotest_common.sh@10 -- # set +x 00:15:34.352 null0 00:15:34.352 [2024-11-06 15:14:03.457766] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.352 [2024-11-06 15:14:03.481933] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.352 15:14:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.352 15:14:03 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:15:34.352 15:14:03 -- host/digest.sh@54 -- # local rw bs qd 00:15:34.353 15:14:03 -- host/digest.sh@56 -- # rw=randread 00:15:34.353 15:14:03 -- host/digest.sh@56 -- # bs=4096 00:15:34.353 15:14:03 -- host/digest.sh@56 -- # qd=128 00:15:34.353 15:14:03 -- host/digest.sh@58 -- # bperfpid=71961 00:15:34.353 15:14:03 -- host/digest.sh@60 -- # waitforlisten 71961 /var/tmp/bperf.sock 00:15:34.353 15:14:03 -- common/autotest_common.sh@829 -- # '[' -z 71961 ']' 00:15:34.353 15:14:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:34.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:34.353 15:14:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:34.353 15:14:03 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:15:34.353 15:14:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:34.353 15:14:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:34.353 15:14:03 -- common/autotest_common.sh@10 -- # set +x 00:15:34.353 [2024-11-06 15:14:03.545133] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:34.353 [2024-11-06 15:14:03.545249] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71961 ] 00:15:34.628 [2024-11-06 15:14:03.686328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.629 [2024-11-06 15:14:03.755059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.565 15:14:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:35.565 15:14:04 -- common/autotest_common.sh@862 -- # return 0 00:15:35.565 15:14:04 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:35.565 15:14:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:35.565 15:14:04 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:15:35.565 15:14:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.565 15:14:04 -- common/autotest_common.sh@10 -- # set +x 00:15:35.565 15:14:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.565 15:14:04 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:35.565 15:14:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:35.823 nvme0n1 00:15:35.823 15:14:05 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:15:35.823 15:14:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.823 15:14:05 -- common/autotest_common.sh@10 -- # set +x 00:15:35.823 15:14:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.823 15:14:05 -- host/digest.sh@69 -- # bperf_py perform_tests 00:15:35.823 15:14:05 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:36.082 Running I/O for 2 seconds... 
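Before the stream of digest failures below, the error-injection wiring traced above can be read in one place. This is a condensed sketch of the RPCs exactly as issued (rpc_cmd goes to the nvmf target's default socket, bperf_rpc to /var/tmp/bperf.sock; paths shortened):

  rpc.py accel_assign_opc -o crc32c -m error                     # target: route crc32c through the 'error' accel module
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc.py accel_error_inject_error -o crc32c -t disable           # target: keep injection off while the controller attaches
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256    # target: start corrupting crc32c results (flags as traced)
  bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # the corrupted digests surface below as "data digest error" on the initiator's TCP qpair, and the
  # affected READs complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22); the --bdev-retry-count -1
  # set above is presumably what allows those completions to be retried rather than failing the run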
00:15:36.082 [2024-11-06 15:14:05.210110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.082 [2024-11-06 15:14:05.210176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.082 [2024-11-06 15:14:05.210206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.082 [2024-11-06 15:14:05.225714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.082 [2024-11-06 15:14:05.225768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.082 [2024-11-06 15:14:05.225798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.082 [2024-11-06 15:14:05.240983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.082 [2024-11-06 15:14:05.241035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.082 [2024-11-06 15:14:05.241063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.082 [2024-11-06 15:14:05.256184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.082 [2024-11-06 15:14:05.256235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.082 [2024-11-06 15:14:05.256264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.082 [2024-11-06 15:14:05.271231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.082 [2024-11-06 15:14:05.271294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.082 [2024-11-06 15:14:05.271308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.082 [2024-11-06 15:14:05.286246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.082 [2024-11-06 15:14:05.286299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.082 [2024-11-06 15:14:05.286327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.082 [2024-11-06 15:14:05.301509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.082 [2024-11-06 15:14:05.301562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.082 [2024-11-06 15:14:05.301591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.082 [2024-11-06 15:14:05.317763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.082 [2024-11-06 15:14:05.317827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.082 [2024-11-06 15:14:05.317856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.082 [2024-11-06 15:14:05.332468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.082 [2024-11-06 15:14:05.332520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.082 [2024-11-06 15:14:05.332548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.082 [2024-11-06 15:14:05.347284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.082 [2024-11-06 15:14:05.347344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.082 [2024-11-06 15:14:05.347373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.342 [2024-11-06 15:14:05.362660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.342 [2024-11-06 15:14:05.362721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.342 [2024-11-06 15:14:05.362750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.342 [2024-11-06 15:14:05.377141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.342 [2024-11-06 15:14:05.377193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.342 [2024-11-06 15:14:05.377221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.342 [2024-11-06 15:14:05.393024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.342 [2024-11-06 15:14:05.393077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.342 [2024-11-06 15:14:05.393105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.342 [2024-11-06 15:14:05.409810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.342 [2024-11-06 15:14:05.409864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.342 [2024-11-06 15:14:05.409893] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.342 [2024-11-06 15:14:05.425752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.342 [2024-11-06 15:14:05.425804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.342 [2024-11-06 15:14:05.425848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.342 [2024-11-06 15:14:05.440901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.342 [2024-11-06 15:14:05.440952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.342 [2024-11-06 15:14:05.440980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.342 [2024-11-06 15:14:05.456445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.342 [2024-11-06 15:14:05.456498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.342 [2024-11-06 15:14:05.456528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.342 [2024-11-06 15:14:05.473925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.342 [2024-11-06 15:14:05.473976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.342 [2024-11-06 15:14:05.474005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.342 [2024-11-06 15:14:05.490905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.342 [2024-11-06 15:14:05.490958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.342 [2024-11-06 15:14:05.490988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.342 [2024-11-06 15:14:05.506781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.342 [2024-11-06 15:14:05.506850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.342 [2024-11-06 15:14:05.506879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.342 [2024-11-06 15:14:05.521882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.342 [2024-11-06 15:14:05.521933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.342 [2024-11-06 
15:14:05.521962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.342 [2024-11-06 15:14:05.537084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.342 [2024-11-06 15:14:05.537136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.342 [2024-11-06 15:14:05.537164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.342 [2024-11-06 15:14:05.552284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.342 [2024-11-06 15:14:05.552336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.342 [2024-11-06 15:14:05.552364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.342 [2024-11-06 15:14:05.567672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.342 [2024-11-06 15:14:05.567733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.342 [2024-11-06 15:14:05.567762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.342 [2024-11-06 15:14:05.583338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.342 [2024-11-06 15:14:05.583393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.342 [2024-11-06 15:14:05.583423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.342 [2024-11-06 15:14:05.598451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.342 [2024-11-06 15:14:05.598502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.342 [2024-11-06 15:14:05.598530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.342 [2024-11-06 15:14:05.614737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.342 [2024-11-06 15:14:05.614795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.342 [2024-11-06 15:14:05.614808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.601 [2024-11-06 15:14:05.631846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.601 [2024-11-06 15:14:05.631898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6874 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:15:36.601 [2024-11-06 15:14:05.631926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.601 [2024-11-06 15:14:05.648258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.601 [2024-11-06 15:14:05.648312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.601 [2024-11-06 15:14:05.648340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.601 [2024-11-06 15:14:05.665449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.601 [2024-11-06 15:14:05.665505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.601 [2024-11-06 15:14:05.665520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.601 [2024-11-06 15:14:05.682285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.601 [2024-11-06 15:14:05.682337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.601 [2024-11-06 15:14:05.682366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.601 [2024-11-06 15:14:05.698634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.601 [2024-11-06 15:14:05.698696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.601 [2024-11-06 15:14:05.698726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.601 [2024-11-06 15:14:05.714502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.601 [2024-11-06 15:14:05.714555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.601 [2024-11-06 15:14:05.714584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.601 [2024-11-06 15:14:05.730357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.601 [2024-11-06 15:14:05.730400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.601 [2024-11-06 15:14:05.730414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.601 [2024-11-06 15:14:05.746070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.601 [2024-11-06 15:14:05.746121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:69 nsid:1 lba:1326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.601 [2024-11-06 15:14:05.746150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.601 [2024-11-06 15:14:05.764117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.601 [2024-11-06 15:14:05.764169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.602 [2024-11-06 15:14:05.764198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.602 [2024-11-06 15:14:05.782203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.602 [2024-11-06 15:14:05.782238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.602 [2024-11-06 15:14:05.782267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.602 [2024-11-06 15:14:05.800160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.602 [2024-11-06 15:14:05.800212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.602 [2024-11-06 15:14:05.800246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.602 [2024-11-06 15:14:05.816711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.602 [2024-11-06 15:14:05.816763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.602 [2024-11-06 15:14:05.816792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.602 [2024-11-06 15:14:05.833025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.602 [2024-11-06 15:14:05.833079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.602 [2024-11-06 15:14:05.833108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.602 [2024-11-06 15:14:05.848884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.602 [2024-11-06 15:14:05.848922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.602 [2024-11-06 15:14:05.848951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.602 [2024-11-06 15:14:05.865867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.602 [2024-11-06 15:14:05.865918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.602 [2024-11-06 15:14:05.865946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.861 [2024-11-06 15:14:05.883969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.861 [2024-11-06 15:14:05.884021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.861 [2024-11-06 15:14:05.884049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.861 [2024-11-06 15:14:05.901367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.861 [2024-11-06 15:14:05.901437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.861 [2024-11-06 15:14:05.901451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.861 [2024-11-06 15:14:05.919189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.861 [2024-11-06 15:14:05.919263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.861 [2024-11-06 15:14:05.919278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.861 [2024-11-06 15:14:05.936501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.861 [2024-11-06 15:14:05.936544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.861 [2024-11-06 15:14:05.936558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.861 [2024-11-06 15:14:05.953252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.861 [2024-11-06 15:14:05.953303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.861 [2024-11-06 15:14:05.953332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.861 [2024-11-06 15:14:05.968724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.861 [2024-11-06 15:14:05.968774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.861 [2024-11-06 15:14:05.968801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.861 [2024-11-06 15:14:05.983831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 
00:15:36.861 [2024-11-06 15:14:05.983881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.861 [2024-11-06 15:14:05.983909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.861 [2024-11-06 15:14:05.998973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.861 [2024-11-06 15:14:05.999022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.861 [2024-11-06 15:14:05.999050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.861 [2024-11-06 15:14:06.014001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.861 [2024-11-06 15:14:06.014038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.861 [2024-11-06 15:14:06.014066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.861 [2024-11-06 15:14:06.028945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.861 [2024-11-06 15:14:06.028994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.861 [2024-11-06 15:14:06.029022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.861 [2024-11-06 15:14:06.044069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.861 [2024-11-06 15:14:06.044119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.861 [2024-11-06 15:14:06.044147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.861 [2024-11-06 15:14:06.059093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.861 [2024-11-06 15:14:06.059143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.861 [2024-11-06 15:14:06.059171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.861 [2024-11-06 15:14:06.074010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.861 [2024-11-06 15:14:06.074061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.861 [2024-11-06 15:14:06.074104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.861 [2024-11-06 15:14:06.089463] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.861 [2024-11-06 15:14:06.089516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.861 [2024-11-06 15:14:06.089528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.861 [2024-11-06 15:14:06.104665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.861 [2024-11-06 15:14:06.104724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.861 [2024-11-06 15:14:06.104752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.861 [2024-11-06 15:14:06.119679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.861 [2024-11-06 15:14:06.119739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.861 [2024-11-06 15:14:06.119783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.861 [2024-11-06 15:14:06.135013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:36.861 [2024-11-06 15:14:06.135065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.861 [2024-11-06 15:14:06.135093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.120 [2024-11-06 15:14:06.150397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.120 [2024-11-06 15:14:06.150449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.120 [2024-11-06 15:14:06.150477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.120 [2024-11-06 15:14:06.165514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.120 [2024-11-06 15:14:06.165565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.120 [2024-11-06 15:14:06.165593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.120 [2024-11-06 15:14:06.181725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.120 [2024-11-06 15:14:06.181771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.120 [2024-11-06 15:14:06.181804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:15:37.120 [2024-11-06 15:14:06.197825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.120 [2024-11-06 15:14:06.197876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.120 [2024-11-06 15:14:06.197903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.120 [2024-11-06 15:14:06.219180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.120 [2024-11-06 15:14:06.219231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.120 [2024-11-06 15:14:06.219298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.120 [2024-11-06 15:14:06.234612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.121 [2024-11-06 15:14:06.234692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.121 [2024-11-06 15:14:06.234707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.121 [2024-11-06 15:14:06.249563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.121 [2024-11-06 15:14:06.249615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.121 [2024-11-06 15:14:06.249643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.121 [2024-11-06 15:14:06.264634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.121 [2024-11-06 15:14:06.264694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.121 [2024-11-06 15:14:06.264723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.121 [2024-11-06 15:14:06.279580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.121 [2024-11-06 15:14:06.279632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.121 [2024-11-06 15:14:06.279674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.121 [2024-11-06 15:14:06.294495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.121 [2024-11-06 15:14:06.294545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.121 [2024-11-06 15:14:06.294572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.121 [2024-11-06 15:14:06.309584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.121 [2024-11-06 15:14:06.309634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.121 [2024-11-06 15:14:06.309662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.121 [2024-11-06 15:14:06.324580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.121 [2024-11-06 15:14:06.324631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.121 [2024-11-06 15:14:06.324659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.121 [2024-11-06 15:14:06.339874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.121 [2024-11-06 15:14:06.339923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.121 [2024-11-06 15:14:06.339951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.121 [2024-11-06 15:14:06.355547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.121 [2024-11-06 15:14:06.355627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.121 [2024-11-06 15:14:06.355655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.121 [2024-11-06 15:14:06.370953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.121 [2024-11-06 15:14:06.371004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.121 [2024-11-06 15:14:06.371032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.121 [2024-11-06 15:14:06.385878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.121 [2024-11-06 15:14:06.385928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.121 [2024-11-06 15:14:06.385955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.380 [2024-11-06 15:14:06.401677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.380 [2024-11-06 15:14:06.401727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.380 [2024-11-06 
15:14:06.401755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.380 [2024-11-06 15:14:06.416736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.380 [2024-11-06 15:14:06.416797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.380 [2024-11-06 15:14:06.416825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.380 [2024-11-06 15:14:06.431503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.380 [2024-11-06 15:14:06.431554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.380 [2024-11-06 15:14:06.431582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.380 [2024-11-06 15:14:06.446319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.380 [2024-11-06 15:14:06.446374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.380 [2024-11-06 15:14:06.446402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.380 [2024-11-06 15:14:06.460978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.380 [2024-11-06 15:14:06.461027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.380 [2024-11-06 15:14:06.461055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.380 [2024-11-06 15:14:06.476391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.380 [2024-11-06 15:14:06.476459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.380 [2024-11-06 15:14:06.476488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.380 [2024-11-06 15:14:06.494081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.380 [2024-11-06 15:14:06.494133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.380 [2024-11-06 15:14:06.494161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.380 [2024-11-06 15:14:06.511172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.380 [2024-11-06 15:14:06.511224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9347 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:15:37.380 [2024-11-06 15:14:06.511277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.380 [2024-11-06 15:14:06.526875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.380 [2024-11-06 15:14:06.526927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.380 [2024-11-06 15:14:06.526956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.380 [2024-11-06 15:14:06.541945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.380 [2024-11-06 15:14:06.541996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.380 [2024-11-06 15:14:06.542024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.380 [2024-11-06 15:14:06.557206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.380 [2024-11-06 15:14:06.557259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.380 [2024-11-06 15:14:06.557287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.380 [2024-11-06 15:14:06.572485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.380 [2024-11-06 15:14:06.572537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.380 [2024-11-06 15:14:06.572566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.380 [2024-11-06 15:14:06.587548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.380 [2024-11-06 15:14:06.587601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.380 [2024-11-06 15:14:06.587629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.380 [2024-11-06 15:14:06.602775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.380 [2024-11-06 15:14:06.602843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.380 [2024-11-06 15:14:06.602855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.380 [2024-11-06 15:14:06.618242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.380 [2024-11-06 15:14:06.618294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:8658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.380 [2024-11-06 15:14:06.618321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.380 [2024-11-06 15:14:06.633413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.380 [2024-11-06 15:14:06.633464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.380 [2024-11-06 15:14:06.633492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.380 [2024-11-06 15:14:06.648585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.380 [2024-11-06 15:14:06.648636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.380 [2024-11-06 15:14:06.648664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.639 [2024-11-06 15:14:06.664274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.639 [2024-11-06 15:14:06.664328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.639 [2024-11-06 15:14:06.664357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.639 [2024-11-06 15:14:06.679505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.639 [2024-11-06 15:14:06.679559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.639 [2024-11-06 15:14:06.679589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.639 [2024-11-06 15:14:06.694574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.639 [2024-11-06 15:14:06.694627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.639 [2024-11-06 15:14:06.694655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.639 [2024-11-06 15:14:06.710529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.639 [2024-11-06 15:14:06.710583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.639 [2024-11-06 15:14:06.710614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.639 [2024-11-06 15:14:06.727186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.640 [2024-11-06 15:14:06.727259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.640 [2024-11-06 15:14:06.727288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.640 [2024-11-06 15:14:06.743399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.640 [2024-11-06 15:14:06.743455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.640 [2024-11-06 15:14:06.743470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.640 [2024-11-06 15:14:06.758457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.640 [2024-11-06 15:14:06.758508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.640 [2024-11-06 15:14:06.758535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.640 [2024-11-06 15:14:06.773438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.640 [2024-11-06 15:14:06.773520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.640 [2024-11-06 15:14:06.773549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.640 [2024-11-06 15:14:06.790445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.640 [2024-11-06 15:14:06.790501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.640 [2024-11-06 15:14:06.790533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.640 [2024-11-06 15:14:06.808301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.640 [2024-11-06 15:14:06.808337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.640 [2024-11-06 15:14:06.808365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.640 [2024-11-06 15:14:06.825566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.640 [2024-11-06 15:14:06.825618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.640 [2024-11-06 15:14:06.825645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.640 [2024-11-06 15:14:06.841517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 
00:15:37.640 [2024-11-06 15:14:06.841598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.640 [2024-11-06 15:14:06.841628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.640 [2024-11-06 15:14:06.857071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.640 [2024-11-06 15:14:06.857124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.640 [2024-11-06 15:14:06.857153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.640 [2024-11-06 15:14:06.873426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.640 [2024-11-06 15:14:06.873479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.640 [2024-11-06 15:14:06.873508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.640 [2024-11-06 15:14:06.888806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.640 [2024-11-06 15:14:06.888841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.640 [2024-11-06 15:14:06.888869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.640 [2024-11-06 15:14:06.904040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.640 [2024-11-06 15:14:06.904105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.640 [2024-11-06 15:14:06.904133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.899 [2024-11-06 15:14:06.921634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.899 [2024-11-06 15:14:06.921698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.899 [2024-11-06 15:14:06.921743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.899 [2024-11-06 15:14:06.938169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.899 [2024-11-06 15:14:06.938220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.899 [2024-11-06 15:14:06.938248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.899 [2024-11-06 15:14:06.953752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.899 [2024-11-06 15:14:06.953803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.899 [2024-11-06 15:14:06.953831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.899 [2024-11-06 15:14:06.969368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.899 [2024-11-06 15:14:06.969445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.899 [2024-11-06 15:14:06.969473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.899 [2024-11-06 15:14:06.984620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.899 [2024-11-06 15:14:06.984695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.899 [2024-11-06 15:14:06.984710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.899 [2024-11-06 15:14:06.999913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.899 [2024-11-06 15:14:06.999963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.899 [2024-11-06 15:14:06.999991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.899 [2024-11-06 15:14:07.015428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.899 [2024-11-06 15:14:07.015494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.899 [2024-11-06 15:14:07.015524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.899 [2024-11-06 15:14:07.030671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.899 [2024-11-06 15:14:07.030745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.899 [2024-11-06 15:14:07.030776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.899 [2024-11-06 15:14:07.045881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.899 [2024-11-06 15:14:07.045932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.899 [2024-11-06 15:14:07.045960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.899 [2024-11-06 15:14:07.061127] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.899 [2024-11-06 15:14:07.061177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.899 [2024-11-06 15:14:07.061205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.899 [2024-11-06 15:14:07.076446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.899 [2024-11-06 15:14:07.076496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.899 [2024-11-06 15:14:07.076524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.899 [2024-11-06 15:14:07.091582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.899 [2024-11-06 15:14:07.091632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.899 [2024-11-06 15:14:07.091674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.899 [2024-11-06 15:14:07.106128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.899 [2024-11-06 15:14:07.106204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.899 [2024-11-06 15:14:07.106234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.899 [2024-11-06 15:14:07.121637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.899 [2024-11-06 15:14:07.121723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.899 [2024-11-06 15:14:07.121754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.899 [2024-11-06 15:14:07.136530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.899 [2024-11-06 15:14:07.136581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.899 [2024-11-06 15:14:07.136608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:37.899 [2024-11-06 15:14:07.151089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.899 [2024-11-06 15:14:07.151139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.899 [2024-11-06 15:14:07.151167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:15:37.899 [2024-11-06 15:14:07.165615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:37.899 [2024-11-06 15:14:07.165687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:37.899 [2024-11-06 15:14:07.165701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:38.158 [2024-11-06 15:14:07.181118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:38.158 [2024-11-06 15:14:07.181168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.158 [2024-11-06 15:14:07.181195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:38.158 [2024-11-06 15:14:07.195323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2311d40) 00:15:38.158 [2024-11-06 15:14:07.195376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.158 [2024-11-06 15:14:07.195405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:38.158 00:15:38.158 Latency(us) 00:15:38.158 [2024-11-06T15:14:07.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.158 [2024-11-06T15:14:07.433Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:15:38.158 nvme0n1 : 2.01 16114.76 62.95 0.00 0.00 7937.16 6762.12 29193.31 00:15:38.158 [2024-11-06T15:14:07.433Z] =================================================================================================================== 00:15:38.158 [2024-11-06T15:14:07.433Z] Total : 16114.76 62.95 0.00 0.00 7937.16 6762.12 29193.31 00:15:38.158 0 00:15:38.158 15:14:07 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:15:38.158 15:14:07 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:15:38.158 | .driver_specific 00:15:38.158 | .nvme_error 00:15:38.158 | .status_code 00:15:38.158 | .command_transient_transport_error' 00:15:38.158 15:14:07 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:15:38.158 15:14:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:15:38.417 15:14:07 -- host/digest.sh@71 -- # (( 127 > 0 )) 00:15:38.417 15:14:07 -- host/digest.sh@73 -- # killprocess 71961 00:15:38.417 15:14:07 -- common/autotest_common.sh@936 -- # '[' -z 71961 ']' 00:15:38.417 15:14:07 -- common/autotest_common.sh@940 -- # kill -0 71961 00:15:38.417 15:14:07 -- common/autotest_common.sh@941 -- # uname 00:15:38.417 15:14:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:38.417 15:14:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71961 00:15:38.417 15:14:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:38.417 15:14:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:38.417 killing process with pid 71961 00:15:38.417 15:14:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71961' 00:15:38.417 Received shutdown signal, test time was about 2.000000 
seconds 00:15:38.417 00:15:38.417 Latency(us) 00:15:38.417 [2024-11-06T15:14:07.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.417 [2024-11-06T15:14:07.692Z] =================================================================================================================== 00:15:38.417 [2024-11-06T15:14:07.692Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:38.417 15:14:07 -- common/autotest_common.sh@955 -- # kill 71961 00:15:38.417 15:14:07 -- common/autotest_common.sh@960 -- # wait 71961 00:15:38.676 15:14:07 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:15:38.676 15:14:07 -- host/digest.sh@54 -- # local rw bs qd 00:15:38.676 15:14:07 -- host/digest.sh@56 -- # rw=randread 00:15:38.676 15:14:07 -- host/digest.sh@56 -- # bs=131072 00:15:38.676 15:14:07 -- host/digest.sh@56 -- # qd=16 00:15:38.676 15:14:07 -- host/digest.sh@58 -- # bperfpid=72020 00:15:38.676 15:14:07 -- host/digest.sh@60 -- # waitforlisten 72020 /var/tmp/bperf.sock 00:15:38.676 15:14:07 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:15:38.676 15:14:07 -- common/autotest_common.sh@829 -- # '[' -z 72020 ']' 00:15:38.676 15:14:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:38.676 15:14:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:38.676 15:14:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:38.676 15:14:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.676 15:14:07 -- common/autotest_common.sh@10 -- # set +x 00:15:38.676 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:38.676 Zero copy mechanism will not be used. 00:15:38.676 [2024-11-06 15:14:07.766285] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:38.676 [2024-11-06 15:14:07.766396] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72020 ] 00:15:38.676 [2024-11-06 15:14:07.904097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.935 [2024-11-06 15:14:07.962113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.502 15:14:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:39.502 15:14:08 -- common/autotest_common.sh@862 -- # return 0 00:15:39.502 15:14:08 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:39.502 15:14:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:39.761 15:14:08 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:15:39.761 15:14:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.761 15:14:08 -- common/autotest_common.sh@10 -- # set +x 00:15:39.761 15:14:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.761 15:14:08 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:39.761 15:14:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:40.020 nvme0n1 00:15:40.020 15:14:09 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:15:40.020 15:14:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.020 15:14:09 -- common/autotest_common.sh@10 -- # set +x 00:15:40.020 15:14:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.020 15:14:09 -- host/digest.sh@69 -- # bperf_py perform_tests 00:15:40.020 15:14:09 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:40.280 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:40.280 Zero copy mechanism will not be used. 00:15:40.280 Running I/O for 2 seconds... 
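A minimal by-hand sketch of what the host/digest.sh steps traced above amount to, assembled only from the commands visible in this log and assuming the same bperf RPC socket, target address and subsystem NQN as this run (the shell variables are shorthand added here, not part of the harness):

    # paths as used by this run
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # start bdevperf as the bperf process (same flags as the randread 131072 qd=16 case above);
    # the harness waits for the RPC socket via waitforlisten before issuing RPCs
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &

    # count NVMe transient transport errors per bdev and retry indefinitely instead of failing the I/O
    "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # attach the TCP controller with data digest enabled, with crc32c error injection initially disabled
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t disable
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # corrupt every 32nd crc32c operation, run the workload, then read the error counter back
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # test passes when at least one transient transport error was recorded

Each injected digest failure shows up in the output below as a "data digest error" followed by a COMMAND TRANSIENT TRANSPORT ERROR completion, which is what the counter above accumulates.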
00:15:40.280 [2024-11-06 15:14:09.393999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.280 [2024-11-06 15:14:09.394066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.280 [2024-11-06 15:14:09.394096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.280 [2024-11-06 15:14:09.398296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.280 [2024-11-06 15:14:09.398350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.280 [2024-11-06 15:14:09.398379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.280 [2024-11-06 15:14:09.402446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.280 [2024-11-06 15:14:09.402500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.280 [2024-11-06 15:14:09.402529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.280 [2024-11-06 15:14:09.406879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.280 [2024-11-06 15:14:09.406931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.280 [2024-11-06 15:14:09.406960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.280 [2024-11-06 15:14:09.410934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.280 [2024-11-06 15:14:09.410987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.280 [2024-11-06 15:14:09.411016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.280 [2024-11-06 15:14:09.415101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.280 [2024-11-06 15:14:09.415152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.280 [2024-11-06 15:14:09.415181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.280 [2024-11-06 15:14:09.419404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.280 [2024-11-06 15:14:09.419444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.280 [2024-11-06 15:14:09.419473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.280 [2024-11-06 15:14:09.423753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.280 [2024-11-06 15:14:09.423805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.280 [2024-11-06 15:14:09.423833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.280 [2024-11-06 15:14:09.428005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.280 [2024-11-06 15:14:09.428057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.280 [2024-11-06 15:14:09.428085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.280 [2024-11-06 15:14:09.432052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.280 [2024-11-06 15:14:09.432103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.280 [2024-11-06 15:14:09.432131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.280 [2024-11-06 15:14:09.436203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.280 [2024-11-06 15:14:09.436256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.436284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.440414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.440467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.440495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.444652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.444714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.444744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.448814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.448867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.448895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.452944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.452995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.453024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.457088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.457140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.457168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.461270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.461325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.461354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.465463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.465516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.465546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.469657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.469722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.469752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.473868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.473905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.473934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.477969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.478006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:40.281 [2024-11-06 15:14:09.478034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.482104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.482156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.482185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.486209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.486261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.486289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.490446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.490498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.490527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.494526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.494578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.494606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.498703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.498754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.498783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.502797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.502848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.502876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.506872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.506923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.506952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.510987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.511039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.511068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.515120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.515171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.515199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.519267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.519322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.519336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.523378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.523433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.523462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.527543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.527582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.527611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.531868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.531922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.531965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.536515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.536555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.536570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.541246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.541300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.541329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.546092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.546146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.281 [2024-11-06 15:14:09.546176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.281 [2024-11-06 15:14:09.550982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.281 [2024-11-06 15:14:09.551035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.282 [2024-11-06 15:14:09.551063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.543 [2024-11-06 15:14:09.555607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.543 [2024-11-06 15:14:09.555648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.543 [2024-11-06 15:14:09.555675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.543 [2024-11-06 15:14:09.560261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.543 [2024-11-06 15:14:09.560299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.543 [2024-11-06 15:14:09.560327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.543 [2024-11-06 15:14:09.565025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.543 [2024-11-06 15:14:09.565077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.543 [2024-11-06 15:14:09.565105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.543 [2024-11-06 15:14:09.569441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 
00:15:40.543 [2024-11-06 15:14:09.569482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.543 [2024-11-06 15:14:09.569495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.543 [2024-11-06 15:14:09.574116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.543 [2024-11-06 15:14:09.574168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.543 [2024-11-06 15:14:09.574196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.543 [2024-11-06 15:14:09.578623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.543 [2024-11-06 15:14:09.579430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.543 [2024-11-06 15:14:09.579460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.543 [2024-11-06 15:14:09.583989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.543 [2024-11-06 15:14:09.584030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.543 [2024-11-06 15:14:09.584058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.543 [2024-11-06 15:14:09.588583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.543 [2024-11-06 15:14:09.588625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.543 [2024-11-06 15:14:09.588640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.543 [2024-11-06 15:14:09.593124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.543 [2024-11-06 15:14:09.593176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.543 [2024-11-06 15:14:09.593204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.543 [2024-11-06 15:14:09.597549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.543 [2024-11-06 15:14:09.597590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.543 [2024-11-06 15:14:09.597605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.543 [2024-11-06 15:14:09.602036] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.543 [2024-11-06 15:14:09.602087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.543 [2024-11-06 15:14:09.602115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.543 [2024-11-06 15:14:09.606593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.543 [2024-11-06 15:14:09.606635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.543 [2024-11-06 15:14:09.606648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.543 [2024-11-06 15:14:09.611363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.543 [2024-11-06 15:14:09.611405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.543 [2024-11-06 15:14:09.611420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.543 [2024-11-06 15:14:09.616060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.543 [2024-11-06 15:14:09.616127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.543 [2024-11-06 15:14:09.616155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.543 [2024-11-06 15:14:09.620459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.543 [2024-11-06 15:14:09.620525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.543 [2024-11-06 15:14:09.620540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.543 [2024-11-06 15:14:09.624975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.543 [2024-11-06 15:14:09.625027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.543 [2024-11-06 15:14:09.625055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.543 [2024-11-06 15:14:09.629295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.543 [2024-11-06 15:14:09.629346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.543 [2024-11-06 15:14:09.629375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:15:40.543 [2024-11-06 15:14:09.633772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.543 [2024-11-06 15:14:09.633823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.633851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.637985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.638037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.638065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.642182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.642235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.642263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.646346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.646398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.646426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.650559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.650609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.650638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.654697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.654748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.654776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.658758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.658809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.658837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.662950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.663001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.663029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.667053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.667104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.667132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.671106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.671157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.671185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.675307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.675346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.675375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.679297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.679335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.679364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.683330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.683368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.683396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.687347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.687385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.687413] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.691487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.691528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.691555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.695741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.695792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.695819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.699845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.699895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.699922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.703960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.704011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.704039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.708023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.708074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.708102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.712097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.712147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.712174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.716189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.716241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.716268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.720257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.720308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.720336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.724391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.724443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.724471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.728557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.728609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.728637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.732642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.732703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.732732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.736747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.736798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.736826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.740887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.740940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.740968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.745047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.745100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.745128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.749177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.749230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.749257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.753247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.753299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.753327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.757549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.757587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.757615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.761828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.761880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.761907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.765924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.765974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.766002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.769982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.770047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.770061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.774107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.774159] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.774187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.778151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.778202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.778230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.782304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.782355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.782383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.786420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.786472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.786499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.790691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.790754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.790783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.794795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.794846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.794875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.798955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.799006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.544 [2024-11-06 15:14:09.799034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.544 [2024-11-06 15:14:09.802959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21d5940) 00:15:40.544 [2024-11-06 15:14:09.803011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.545 [2024-11-06 15:14:09.803039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.545 [2024-11-06 15:14:09.807009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.545 [2024-11-06 15:14:09.807060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.545 [2024-11-06 15:14:09.807102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.545 [2024-11-06 15:14:09.811011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.545 [2024-11-06 15:14:09.811062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.545 [2024-11-06 15:14:09.811090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.545 [2024-11-06 15:14:09.815305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.545 [2024-11-06 15:14:09.815343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.545 [2024-11-06 15:14:09.815355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.804 [2024-11-06 15:14:09.819535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.804 [2024-11-06 15:14:09.819574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.804 [2024-11-06 15:14:09.819618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.804 [2024-11-06 15:14:09.823871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.804 [2024-11-06 15:14:09.823922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.804 [2024-11-06 15:14:09.823950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.804 [2024-11-06 15:14:09.828030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.804 [2024-11-06 15:14:09.828081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.804 [2024-11-06 15:14:09.828108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.804 [2024-11-06 15:14:09.832059] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.804 [2024-11-06 15:14:09.832110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.804 [2024-11-06 15:14:09.832138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.804 [2024-11-06 15:14:09.836179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.804 [2024-11-06 15:14:09.836230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.804 [2024-11-06 15:14:09.836258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.804 [2024-11-06 15:14:09.840194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.804 [2024-11-06 15:14:09.840244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.804 [2024-11-06 15:14:09.840272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.804 [2024-11-06 15:14:09.844581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.804 [2024-11-06 15:14:09.844632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.804 [2024-11-06 15:14:09.844660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.804 [2024-11-06 15:14:09.848741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.804 [2024-11-06 15:14:09.848791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.804 [2024-11-06 15:14:09.848819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.804 [2024-11-06 15:14:09.853103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.804 [2024-11-06 15:14:09.853155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.804 [2024-11-06 15:14:09.853184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.804 [2024-11-06 15:14:09.857472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.804 [2024-11-06 15:14:09.857525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.804 [2024-11-06 15:14:09.857555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:15:40.804 [2024-11-06 15:14:09.862179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.804 [2024-11-06 15:14:09.862233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.804 [2024-11-06 15:14:09.862262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.804 [2024-11-06 15:14:09.867091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.804 [2024-11-06 15:14:09.867161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.804 [2024-11-06 15:14:09.867191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.804 [2024-11-06 15:14:09.871936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.804 [2024-11-06 15:14:09.871991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.804 [2024-11-06 15:14:09.872006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.804 [2024-11-06 15:14:09.876639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.804 [2024-11-06 15:14:09.876722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.804 [2024-11-06 15:14:09.876737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.804 [2024-11-06 15:14:09.881476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.804 [2024-11-06 15:14:09.881529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.804 [2024-11-06 15:14:09.881557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.804 [2024-11-06 15:14:09.886059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.804 [2024-11-06 15:14:09.886111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.886139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.890547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.890600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.890628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.894877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.894915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.894953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.898942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.898979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.899007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.902902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.902939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.902967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.906872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.906910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.906938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.910775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.910811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.910839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.914809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.914846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.914875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.918784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.918819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.918848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.922788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.922825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.922854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.926913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.926950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.926979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.930864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.930901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.930929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.934806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.934843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.934870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.938750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.938784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.938811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.942808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.942844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.942872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.946851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.946887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:40.805 [2024-11-06 15:14:09.946915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.950835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.950871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.950899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.954795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.954832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.954860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.958847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.958883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.958911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.962906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.962945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.962972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.966847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.966883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.966912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.970788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.970823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.970852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.974915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.974953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.974981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.978913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.978949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.978977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.982874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.982911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.982940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.986839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.986876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.986904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.990807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.990843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.805 [2024-11-06 15:14:09.990871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.805 [2024-11-06 15:14:09.994831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.805 [2024-11-06 15:14:09.994867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.806 [2024-11-06 15:14:09.994894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.806 [2024-11-06 15:14:09.998831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.806 [2024-11-06 15:14:09.998867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.806 [2024-11-06 15:14:09.998895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.806 [2024-11-06 15:14:10.003107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.806 [2024-11-06 15:14:10.003148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.806 [2024-11-06 15:14:10.003163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.806 [2024-11-06 15:14:10.007195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.806 [2024-11-06 15:14:10.007233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.806 [2024-11-06 15:14:10.007287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.806 [2024-11-06 15:14:10.011530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.806 [2024-11-06 15:14:10.011572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.806 [2024-11-06 15:14:10.011586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.806 [2024-11-06 15:14:10.015846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.806 [2024-11-06 15:14:10.015883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.806 [2024-11-06 15:14:10.015911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.806 [2024-11-06 15:14:10.020406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.806 [2024-11-06 15:14:10.020463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.806 [2024-11-06 15:14:10.020477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.806 [2024-11-06 15:14:10.024748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.806 [2024-11-06 15:14:10.024784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.806 [2024-11-06 15:14:10.024812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.806 [2024-11-06 15:14:10.028960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.806 [2024-11-06 15:14:10.028998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.806 [2024-11-06 15:14:10.029026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.806 [2024-11-06 15:14:10.033208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 
00:15:40.806 [2024-11-06 15:14:10.033246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.806 [2024-11-06 15:14:10.033274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.806 [2024-11-06 15:14:10.037394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.806 [2024-11-06 15:14:10.037446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.806 [2024-11-06 15:14:10.037474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.806 [2024-11-06 15:14:10.041595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.806 [2024-11-06 15:14:10.041646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.806 [2024-11-06 15:14:10.041685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.806 [2024-11-06 15:14:10.045770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.806 [2024-11-06 15:14:10.045821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.806 [2024-11-06 15:14:10.045848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.806 [2024-11-06 15:14:10.049974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.806 [2024-11-06 15:14:10.050026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.806 [2024-11-06 15:14:10.050055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.806 [2024-11-06 15:14:10.054182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.806 [2024-11-06 15:14:10.054234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.806 [2024-11-06 15:14:10.054262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.806 [2024-11-06 15:14:10.058321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.806 [2024-11-06 15:14:10.058374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.806 [2024-11-06 15:14:10.058402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.806 [2024-11-06 15:14:10.062517] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.806 [2024-11-06 15:14:10.062570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.806 [2024-11-06 15:14:10.062598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:40.806 [2024-11-06 15:14:10.066781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.806 [2024-11-06 15:14:10.066833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.806 [2024-11-06 15:14:10.066861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.806 [2024-11-06 15:14:10.071250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.806 [2024-11-06 15:14:10.071309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.806 [2024-11-06 15:14:10.071322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.806 [2024-11-06 15:14:10.075885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:40.806 [2024-11-06 15:14:10.075921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.806 [2024-11-06 15:14:10.075949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.066 [2024-11-06 15:14:10.080451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.066 [2024-11-06 15:14:10.080506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.066 [2024-11-06 15:14:10.080551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.066 [2024-11-06 15:14:10.085119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.066 [2024-11-06 15:14:10.085172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.066 [2024-11-06 15:14:10.085201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.066 [2024-11-06 15:14:10.089307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.066 [2024-11-06 15:14:10.089358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.066 [2024-11-06 15:14:10.089386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:15:41.066 [2024-11-06 15:14:10.093566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.066 [2024-11-06 15:14:10.093617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.066 [2024-11-06 15:14:10.093646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.066 [2024-11-06 15:14:10.097722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.066 [2024-11-06 15:14:10.097772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.066 [2024-11-06 15:14:10.097800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.066 [2024-11-06 15:14:10.101871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.066 [2024-11-06 15:14:10.101923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.066 [2024-11-06 15:14:10.101951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.066 [2024-11-06 15:14:10.105962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.066 [2024-11-06 15:14:10.106013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.066 [2024-11-06 15:14:10.106041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.066 [2024-11-06 15:14:10.110042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.066 [2024-11-06 15:14:10.110093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.066 [2024-11-06 15:14:10.110121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.066 [2024-11-06 15:14:10.114261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.066 [2024-11-06 15:14:10.114313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.066 [2024-11-06 15:14:10.114341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.066 [2024-11-06 15:14:10.118472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.066 [2024-11-06 15:14:10.118524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.066 [2024-11-06 15:14:10.118552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.066 [2024-11-06 15:14:10.122600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.066 [2024-11-06 15:14:10.122637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.066 [2024-11-06 15:14:10.122679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.066 [2024-11-06 15:14:10.127031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.066 [2024-11-06 15:14:10.127099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.066 [2024-11-06 15:14:10.127127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.066 [2024-11-06 15:14:10.131172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.066 [2024-11-06 15:14:10.131224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.066 [2024-11-06 15:14:10.131295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.066 [2024-11-06 15:14:10.135197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.066 [2024-11-06 15:14:10.135273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.066 [2024-11-06 15:14:10.135304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.066 [2024-11-06 15:14:10.139140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.066 [2024-11-06 15:14:10.139190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.066 [2024-11-06 15:14:10.139219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.066 [2024-11-06 15:14:10.143139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.066 [2024-11-06 15:14:10.143190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.066 [2024-11-06 15:14:10.143218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.147275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.147315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.147344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.151394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.151433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.151448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.155516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.155556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.155585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.159770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.159820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.159848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.164014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.164064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.164093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.168096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.168147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.168175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.172257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.172309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.172336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.176361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.176413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:41.067 [2024-11-06 15:14:10.176441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.180539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.180606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.180634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.184838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.184890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.184918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.189037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.189091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.189120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.193281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.193334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.193361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.197479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.197532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.197560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.201754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.201822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.201834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.206183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.206236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.206264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.210915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.210967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.210997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.215290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.215331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.215345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.219804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.219841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.219870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.224210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.224263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.224292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.228400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.228452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.228481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.232912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.232965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.232994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.237412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.237465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.237478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.241725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.241776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.241804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.245992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.246044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.246072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.250546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.250600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.250629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.254873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.254924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.254953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.067 [2024-11-06 15:14:10.259180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.067 [2024-11-06 15:14:10.259231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.067 [2024-11-06 15:14:10.259285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.068 [2024-11-06 15:14:10.263466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.068 [2024-11-06 15:14:10.263506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.068 [2024-11-06 15:14:10.263519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.068 [2024-11-06 15:14:10.267970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 
00:15:41.068 [2024-11-06 15:14:10.268022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.068 [2024-11-06 15:14:10.268050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.068 [2024-11-06 15:14:10.272076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.068 [2024-11-06 15:14:10.272129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.068 [2024-11-06 15:14:10.272157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.068 [2024-11-06 15:14:10.276183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.068 [2024-11-06 15:14:10.276234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.068 [2024-11-06 15:14:10.276262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.068 [2024-11-06 15:14:10.280467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.068 [2024-11-06 15:14:10.280520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.068 [2024-11-06 15:14:10.280549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.068 [2024-11-06 15:14:10.284757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.068 [2024-11-06 15:14:10.284809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.068 [2024-11-06 15:14:10.284837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.068 [2024-11-06 15:14:10.289219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.068 [2024-11-06 15:14:10.289272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.068 [2024-11-06 15:14:10.289301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.068 [2024-11-06 15:14:10.293613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.068 [2024-11-06 15:14:10.293692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.068 [2024-11-06 15:14:10.293707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.068 [2024-11-06 15:14:10.297887] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.068 [2024-11-06 15:14:10.297958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.068 [2024-11-06 15:14:10.297972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.068 [2024-11-06 15:14:10.302083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.068 [2024-11-06 15:14:10.302136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.068 [2024-11-06 15:14:10.302164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.068 [2024-11-06 15:14:10.306251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.068 [2024-11-06 15:14:10.306304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.068 [2024-11-06 15:14:10.306332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.068 [2024-11-06 15:14:10.310619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.068 [2024-11-06 15:14:10.310696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.068 [2024-11-06 15:14:10.310711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.068 [2024-11-06 15:14:10.314755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.068 [2024-11-06 15:14:10.314788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.068 [2024-11-06 15:14:10.314816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.068 [2024-11-06 15:14:10.319210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.068 [2024-11-06 15:14:10.319305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.068 [2024-11-06 15:14:10.319335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.068 [2024-11-06 15:14:10.323684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.068 [2024-11-06 15:14:10.323764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.068 [2024-11-06 15:14:10.323794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:15:41.068 [2024-11-06 15:14:10.328163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.068 [2024-11-06 15:14:10.328216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.068 [2024-11-06 15:14:10.328245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.068 [2024-11-06 15:14:10.332615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.068 [2024-11-06 15:14:10.332692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.068 [2024-11-06 15:14:10.332707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.068 [2024-11-06 15:14:10.337062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.068 [2024-11-06 15:14:10.337099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.068 [2024-11-06 15:14:10.337127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.328 [2024-11-06 15:14:10.341526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.328 [2024-11-06 15:14:10.341577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.328 [2024-11-06 15:14:10.341606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.328 [2024-11-06 15:14:10.345896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.328 [2024-11-06 15:14:10.345934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.328 [2024-11-06 15:14:10.345947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.328 [2024-11-06 15:14:10.350067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.328 [2024-11-06 15:14:10.350134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.328 [2024-11-06 15:14:10.350162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.328 [2024-11-06 15:14:10.354177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.328 [2024-11-06 15:14:10.354229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.328 [2024-11-06 15:14:10.354257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.328 [2024-11-06 15:14:10.358339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.328 [2024-11-06 15:14:10.358406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.328 [2024-11-06 15:14:10.358434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.328 [2024-11-06 15:14:10.362576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.328 [2024-11-06 15:14:10.362628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.328 [2024-11-06 15:14:10.362657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.328 [2024-11-06 15:14:10.366673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.328 [2024-11-06 15:14:10.366735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.328 [2024-11-06 15:14:10.366765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.328 [2024-11-06 15:14:10.370993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.328 [2024-11-06 15:14:10.371032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.328 [2024-11-06 15:14:10.371075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.328 [2024-11-06 15:14:10.375474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.328 [2024-11-06 15:14:10.375516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.328 [2024-11-06 15:14:10.375529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.328 [2024-11-06 15:14:10.380354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.328 [2024-11-06 15:14:10.380393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.328 [2024-11-06 15:14:10.380423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.328 [2024-11-06 15:14:10.385329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.328 [2024-11-06 15:14:10.385367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.328 [2024-11-06 15:14:10.385396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.328 [2024-11-06 15:14:10.389921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.328 [2024-11-06 15:14:10.389972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.328 [2024-11-06 15:14:10.390002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.328 [2024-11-06 15:14:10.394516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.328 [2024-11-06 15:14:10.394570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.394599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.399113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.399166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.399194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.403741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.403798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.403812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.408165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.408218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.408246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.412750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.412803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.412832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.417174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.417227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:41.329 [2024-11-06 15:14:10.417255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.421395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.421447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.421476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.425924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.425976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.426005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.430156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.430208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.430238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.434348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.434416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.434444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.438583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.438624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.438638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.442764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.442832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.442861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.446961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.447013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.447041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.451157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.451210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.451245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.455551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.455636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.455665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.459805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.459856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.459884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.463973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.464025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.464053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.468315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.468367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.468396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.472545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.472597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.472625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.476802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.476854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.476883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.481006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.481059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.481088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.485229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.485282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.485311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.489531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.489583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.489611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.493829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.493880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.493909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.498284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.498338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.498368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.502493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.502547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.502575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.506708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 
00:15:41.329 [2024-11-06 15:14:10.506759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.506802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.329 [2024-11-06 15:14:10.511125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.329 [2024-11-06 15:14:10.511177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.329 [2024-11-06 15:14:10.511205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.330 [2024-11-06 15:14:10.515267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.330 [2024-11-06 15:14:10.515308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.330 [2024-11-06 15:14:10.515338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.330 [2024-11-06 15:14:10.519555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.330 [2024-11-06 15:14:10.519625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.330 [2024-11-06 15:14:10.519668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.330 [2024-11-06 15:14:10.523893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.330 [2024-11-06 15:14:10.523944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.330 [2024-11-06 15:14:10.523973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.330 [2024-11-06 15:14:10.528208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.330 [2024-11-06 15:14:10.528261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.330 [2024-11-06 15:14:10.528290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.330 [2024-11-06 15:14:10.532436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.330 [2024-11-06 15:14:10.532488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.330 [2024-11-06 15:14:10.532516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.330 [2024-11-06 15:14:10.536619] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.330 [2024-11-06 15:14:10.536699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.330 [2024-11-06 15:14:10.536714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.330 [2024-11-06 15:14:10.541114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.330 [2024-11-06 15:14:10.541167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.330 [2024-11-06 15:14:10.541195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.330 [2024-11-06 15:14:10.545485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.330 [2024-11-06 15:14:10.545537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.330 [2024-11-06 15:14:10.545566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.330 [2024-11-06 15:14:10.549746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.330 [2024-11-06 15:14:10.549797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.330 [2024-11-06 15:14:10.549825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.330 [2024-11-06 15:14:10.554201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.330 [2024-11-06 15:14:10.554253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.330 [2024-11-06 15:14:10.554282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.330 [2024-11-06 15:14:10.558528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.330 [2024-11-06 15:14:10.558583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.330 [2024-11-06 15:14:10.558628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.330 [2024-11-06 15:14:10.563151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.330 [2024-11-06 15:14:10.563203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.330 [2024-11-06 15:14:10.563232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:15:41.330 [2024-11-06 15:14:10.567970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.330 [2024-11-06 15:14:10.568027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.330 [2024-11-06 15:14:10.568041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.330 [2024-11-06 15:14:10.572445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.330 [2024-11-06 15:14:10.572500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.330 [2024-11-06 15:14:10.572531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.330 [2024-11-06 15:14:10.576915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.330 [2024-11-06 15:14:10.576953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.330 [2024-11-06 15:14:10.576983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.330 [2024-11-06 15:14:10.581583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.330 [2024-11-06 15:14:10.581623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.330 [2024-11-06 15:14:10.581654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.330 [2024-11-06 15:14:10.586167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.330 [2024-11-06 15:14:10.586204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.330 [2024-11-06 15:14:10.586234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.330 [2024-11-06 15:14:10.590550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.330 [2024-11-06 15:14:10.590590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.330 [2024-11-06 15:14:10.590620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.330 [2024-11-06 15:14:10.594887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.330 [2024-11-06 15:14:10.594923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.330 [2024-11-06 15:14:10.594952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.330 [2024-11-06 15:14:10.599126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.330 [2024-11-06 15:14:10.599177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.330 [2024-11-06 15:14:10.599207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.590 [2024-11-06 15:14:10.603748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.590 [2024-11-06 15:14:10.603793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.590 [2024-11-06 15:14:10.603823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.607763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.607815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.607845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.611969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.612005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.612035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.616174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.616227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.616257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.620352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.620390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.620420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.624427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.624464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.624494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.628542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.628579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.628610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.632766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.632803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.632833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.636815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.636853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.636866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.641091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.641130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.641160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.645520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.645556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.645586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.649756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.649792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.649821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.653838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.653873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:41.591 [2024-11-06 15:14:10.653903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.657971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.658008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.658038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.662131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.662169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.662199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.666269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.666307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.666352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.670553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.670590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.670621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.674715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.674748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.674761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.679197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.679398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.679419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.684288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.684331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.684345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.689167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.689205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.689235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.693447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.693484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.693514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.697808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.697846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.697876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.701849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.701885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.701915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.705906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.705959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.705973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.709992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.710029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.710058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.714107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.714144] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.591 [2024-11-06 15:14:10.714174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.591 [2024-11-06 15:14:10.718349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.591 [2024-11-06 15:14:10.718385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.718415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.722412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.722448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.722478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.726451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.726488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.726518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.730508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.730544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.730574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.734565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.734602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.734631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.738572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.738609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.738639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.742545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 
00:15:41.592 [2024-11-06 15:14:10.742582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.742611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.746709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.746745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.746774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.750822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.750858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.750888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.754769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.754805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.754834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.758769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.758805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.758834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.762745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.762780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.762809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.766790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.766825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.766855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.771072] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.771109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.771139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.775407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.775450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.775466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.779823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.779859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.779888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.784106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.784143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.784173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.788867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.788903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.788933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.793483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.793521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.793552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.797830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.797867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.797897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.802139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.802176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.802205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.806323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.806360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.806389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.810428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.810467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.810497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.814449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.814488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.814518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.818620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.818686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.818718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.822640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.822685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.822714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.826708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.826744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.592 [2024-11-06 15:14:10.826774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.592 [2024-11-06 15:14:10.830715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.592 [2024-11-06 15:14:10.830750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.593 [2024-11-06 15:14:10.830780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.593 [2024-11-06 15:14:10.834791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.593 [2024-11-06 15:14:10.834827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.593 [2024-11-06 15:14:10.834856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.593 [2024-11-06 15:14:10.838804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.593 [2024-11-06 15:14:10.838840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.593 [2024-11-06 15:14:10.838869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:41.593 [2024-11-06 15:14:10.842826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.593 [2024-11-06 15:14:10.842863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.593 [2024-11-06 15:14:10.842893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:41.593 [2024-11-06 15:14:10.846973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.593 [2024-11-06 15:14:10.847010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.593 [2024-11-06 15:14:10.847040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:41.593 [2024-11-06 15:14:10.851056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.593 [2024-11-06 15:14:10.851093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.593 [2024-11-06 15:14:10.851123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:41.593 [2024-11-06 15:14:10.855019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940) 00:15:41.593 [2024-11-06 15:14:10.855055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.593 [2024-11-06 15:14:10.855086] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:15:41.593 [2024-11-06 15:14:10.859154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940)
00:15:41.593 [2024-11-06 15:14:10.859190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:41.593 [2024-11-06 15:14:10.859220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair=(0x21d5940), the failing READ, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for dozens more reads between 15:14:10.863 and 15:14:11.381, differing only in the lba and sqhd values ...]
00:15:42.118 [2024-11-06 15:14:11.385374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d5940)
00:15:42.118 [2024-11-06 15:14:11.385413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:42.118 [2024-11-06 15:14:11.385443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:15:42.118
00:15:42.118 Latency(us)
00:15:42.118 [2024-11-06T15:14:11.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:42.118 [2024-11-06T15:14:11.393Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:15:42.118 nvme0n1 : 2.00 7250.49 906.31 0.00 0.00 2203.50 1720.32 7477.06
00:15:42.118 [2024-11-06T15:14:11.393Z] ===================================================================================================================
00:15:42.118 [2024-11-06T15:14:11.393Z] Total : 7250.49 906.31 0.00 0.00 2203.50 1720.32 7477.06
00:15:42.377 0
00:15:42.377 15:14:11 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:15:42.377 15:14:11 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:15:42.377 15:14:11 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:15:42.377 | .driver_specific
00:15:42.377 | .nvme_error
00:15:42.377 | .status_code
00:15:42.377 | .command_transient_transport_error'
00:15:42.377 15:14:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:15:42.636 15:14:11 -- host/digest.sh@71 -- # (( 468 > 0 ))
00:15:42.636 15:14:11 -- host/digest.sh@73 -- # killprocess 72020
00:15:42.636 15:14:11 -- common/autotest_common.sh@936 -- # '[' -z 72020 ']'
00:15:42.636 15:14:11 -- common/autotest_common.sh@940 -- # kill -0 72020
00:15:42.636 15:14:11 -- common/autotest_common.sh@941 -- # uname
00:15:42.636 15:14:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:42.636 15:14:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72020
00:15:42.636 killing process with pid 72020
00:15:42.636 Received shutdown signal, test time was about 2.000000 seconds
00:15:42.636
00:15:42.636 Latency(us)
00:15:42.636 [2024-11-06T15:14:11.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:42.636 [2024-11-06T15:14:11.911Z] ===================================================================================================================
00:15:42.636 [2024-11-06T15:14:11.911Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:42.636 15:14:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:15:42.636 15:14:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:15:42.636 15:14:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72020'
00:15:42.636 15:14:11 -- common/autotest_common.sh@955 -- # kill 72020
00:15:42.636 15:14:11 -- common/autotest_common.sh@960 -- # wait 72020
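Two things worth pulling out of the randread summary and trace above. First, the table is self-consistent: at this job's 131072-byte IO size, 7250.49 IOPS works out to 7250.49 / 8 ≈ 906.31 MiB/s, matching the MiB/s column. Second, the 468 that feeds the (( 468 > 0 )) check is the per-bdev count of COMMAND TRANSIENT TRANSPORT ERROR completions read back over the bperf RPC socket. The sketch below is a plausible reconstruction of the get_transient_errcount helper, built only from the rpc.py call and jq filter that its trace expands to; the function body is illustrative, not the helper's actual source.

    # Illustrative reconstruction of get_transient_errcount as traced above.
    # The rpc.py path and the bperf socket are the ones printed in the trace.
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)   # 468 in the run above
    (( errcount > 0 ))   # the leg passes only if the injected digest errors were actually observed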
00:15:42.895 15:14:11 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:15:42.896 15:14:11 -- host/digest.sh@54 -- # local rw bs qd 00:15:42.896 15:14:11 -- host/digest.sh@56 -- # rw=randwrite 00:15:42.896 15:14:11 -- host/digest.sh@56 -- # bs=4096 00:15:42.896 15:14:11 -- host/digest.sh@56 -- # qd=128 00:15:42.896 15:14:11 -- host/digest.sh@58 -- # bperfpid=72086 00:15:42.896 15:14:11 -- host/digest.sh@60 -- # waitforlisten 72086 /var/tmp/bperf.sock 00:15:42.896 15:14:11 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:15:42.896 15:14:11 -- common/autotest_common.sh@829 -- # '[' -z 72086 ']' 00:15:42.896 15:14:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:42.896 15:14:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:42.896 15:14:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:42.896 15:14:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:42.896 15:14:11 -- common/autotest_common.sh@10 -- # set +x 00:15:42.896 [2024-11-06 15:14:11.971538] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:42.896 [2024-11-06 15:14:11.971938] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72086 ] 00:15:42.896 [2024-11-06 15:14:12.103897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.896 [2024-11-06 15:14:12.158324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.831 15:14:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:43.831 15:14:12 -- common/autotest_common.sh@862 -- # return 0 00:15:43.831 15:14:12 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:43.831 15:14:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:44.090 15:14:13 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:15:44.090 15:14:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.090 15:14:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.090 15:14:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.090 15:14:13 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:44.090 15:14:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:44.349 nvme0n1 00:15:44.349 15:14:13 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:15:44.349 15:14:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.349 15:14:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.349 15:14:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.349 15:14:13 -- host/digest.sh@69 -- # bperf_py perform_tests 00:15:44.349 15:14:13 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:44.608 Running I/O for 2 seconds... 
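Editor's note: the trace above sets up the randwrite data-digest run: bdevperf is started idle (-z) on the bperf socket, crc32c error injection is first disabled while the controller is attached with data digest (--ddgst) over TCP, then injection is re-enabled so the target corrupts crc32c results during I/O. A condensed sketch of that RPC sequence under the paths shown in the log; the BPERF_RPC/TGT_RPC names are illustrative, and the target socket for the rpc_cmd calls is assumed (the trace does not show which socket rpc_cmd uses):

BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
TGT_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # assumed default /var/tmp/spdk.sock

# Start bdevperf idle (-z); it waits for perform_tests over its RPC socket.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

# Enable per-bdev NVMe error statistics and unlimited retries on the initiator side.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Keep injection off while attaching the controller with data digest enabled.
$TGT_RPC accel_error_inject_error -o crc32c -t disable
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Re-enable crc32c corruption (parameters as traced); during the timed run this
# surfaces as the "data digest error" completions logged below.
$TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests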
00:15:44.608 [2024-11-06 15:14:13.722573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190ddc00 00:15:44.608 [2024-11-06 15:14:13.724142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.608 [2024-11-06 15:14:13.724188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.608 [2024-11-06 15:14:13.738136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190fef90 00:15:44.608 [2024-11-06 15:14:13.739633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.608 [2024-11-06 15:14:13.739723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.608 [2024-11-06 15:14:13.754494] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190ff3c8 00:15:44.608 [2024-11-06 15:14:13.755950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.608 [2024-11-06 15:14:13.755987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:44.608 [2024-11-06 15:14:13.770186] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190feb58 00:15:44.608 [2024-11-06 15:14:13.771675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.608 [2024-11-06 15:14:13.771884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:44.608 [2024-11-06 15:14:13.785576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190fe720 00:15:44.608 [2024-11-06 15:14:13.786953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.608 [2024-11-06 15:14:13.787154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:44.608 [2024-11-06 15:14:13.800815] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190fe2e8 00:15:44.608 [2024-11-06 15:14:13.802182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.608 [2024-11-06 15:14:13.802231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:44.608 [2024-11-06 15:14:13.815946] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190fdeb0 00:15:44.608 [2024-11-06 15:14:13.817218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.608 [2024-11-06 15:14:13.817251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 
m:0 dnr:0 00:15:44.608 [2024-11-06 15:14:13.830301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190fda78 00:15:44.608 [2024-11-06 15:14:13.831690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.608 [2024-11-06 15:14:13.831922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:44.608 [2024-11-06 15:14:13.844945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190fd640 00:15:44.608 [2024-11-06 15:14:13.846339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.608 [2024-11-06 15:14:13.846552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:44.608 [2024-11-06 15:14:13.860133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190fd208 00:15:44.608 [2024-11-06 15:14:13.861494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.608 [2024-11-06 15:14:13.861713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:44.608 [2024-11-06 15:14:13.876320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190fcdd0 00:15:44.608 [2024-11-06 15:14:13.877808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.608 [2024-11-06 15:14:13.878002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:44.879 [2024-11-06 15:14:13.893365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190fc998 00:15:44.879 [2024-11-06 15:14:13.894874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.879 [2024-11-06 15:14:13.895166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:44.879 [2024-11-06 15:14:13.909337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190fc560 00:15:44.879 [2024-11-06 15:14:13.910832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.879 [2024-11-06 15:14:13.911103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:44.879 [2024-11-06 15:14:13.924966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190fc128 00:15:44.879 [2024-11-06 15:14:13.926278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.879 [2024-11-06 15:14:13.926472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:44.879 [2024-11-06 15:14:13.941002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190fbcf0 00:15:44.879 [2024-11-06 15:14:13.942558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.879 [2024-11-06 15:14:13.942783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:44.879 [2024-11-06 15:14:13.957954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190fb8b8 00:15:44.879 [2024-11-06 15:14:13.959387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.879 [2024-11-06 15:14:13.959567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:44.879 [2024-11-06 15:14:13.975389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190fb480 00:15:44.879 [2024-11-06 15:14:13.976857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.879 [2024-11-06 15:14:13.977164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:44.879 [2024-11-06 15:14:13.991554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190fb048 00:15:44.879 [2024-11-06 15:14:13.993082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.879 [2024-11-06 15:14:13.993335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:44.879 [2024-11-06 15:14:14.008043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190fac10 00:15:44.879 [2024-11-06 15:14:14.009639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.879 [2024-11-06 15:14:14.009716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:44.879 [2024-11-06 15:14:14.023461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190fa7d8 00:15:44.879 [2024-11-06 15:14:14.024844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.879 [2024-11-06 15:14:14.024872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:44.879 [2024-11-06 15:14:14.038121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190fa3a0 00:15:44.879 [2024-11-06 15:14:14.039561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.879 [2024-11-06 15:14:14.039642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:44.879 [2024-11-06 15:14:14.052952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f9f68 00:15:44.879 [2024-11-06 15:14:14.054369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.879 [2024-11-06 15:14:14.054403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:44.879 [2024-11-06 15:14:14.067793] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f9b30 00:15:44.879 [2024-11-06 15:14:14.068937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.879 [2024-11-06 15:14:14.068970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:44.879 [2024-11-06 15:14:14.082171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f96f8 00:15:44.879 [2024-11-06 15:14:14.083374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.879 [2024-11-06 15:14:14.083411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:44.879 [2024-11-06 15:14:14.096734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f92c0 00:15:44.879 [2024-11-06 15:14:14.097869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.879 [2024-11-06 15:14:14.097902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:44.879 [2024-11-06 15:14:14.111071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f8e88 00:15:44.879 [2024-11-06 15:14:14.112461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.879 [2024-11-06 15:14:14.112497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:44.879 [2024-11-06 15:14:14.125816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f8a50 00:15:44.880 [2024-11-06 15:14:14.127109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.880 [2024-11-06 15:14:14.127140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:44.880 [2024-11-06 15:14:14.141084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f8618 00:15:44.880 [2024-11-06 15:14:14.142492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.880 [2024-11-06 15:14:14.142573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:45.162 [2024-11-06 15:14:14.158451] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f81e0 00:15:45.162 [2024-11-06 15:14:14.159584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.162 [2024-11-06 15:14:14.159700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:45.162 [2024-11-06 15:14:14.175405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f7da8 00:15:45.162 [2024-11-06 15:14:14.176560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.162 [2024-11-06 15:14:14.176601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:45.162 [2024-11-06 15:14:14.191050] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f7970 00:15:45.162 [2024-11-06 15:14:14.192165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.162 [2024-11-06 15:14:14.192359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:45.162 [2024-11-06 15:14:14.205855] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f7538 00:15:45.162 [2024-11-06 15:14:14.206959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.162 [2024-11-06 15:14:14.206993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:45.162 [2024-11-06 15:14:14.220505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f7100 00:15:45.162 [2024-11-06 15:14:14.221593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.162 [2024-11-06 15:14:14.221850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:45.162 [2024-11-06 15:14:14.235413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f6cc8 00:15:45.162 [2024-11-06 15:14:14.236623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.162 [2024-11-06 15:14:14.236882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:45.162 [2024-11-06 15:14:14.250337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f6890 00:15:45.162 [2024-11-06 15:14:14.251661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.162 [2024-11-06 15:14:14.251914] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:45.162 [2024-11-06 15:14:14.265884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f6458 00:15:45.162 [2024-11-06 15:14:14.267151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.162 [2024-11-06 15:14:14.267362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:45.162 [2024-11-06 15:14:14.282313] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f6020 00:15:45.162 [2024-11-06 15:14:14.283607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.162 [2024-11-06 15:14:14.283860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:45.163 [2024-11-06 15:14:14.297569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f5be8 00:15:45.163 [2024-11-06 15:14:14.298814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.163 [2024-11-06 15:14:14.299013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:45.163 [2024-11-06 15:14:14.312474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f57b0 00:15:45.163 [2024-11-06 15:14:14.313624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.163 [2024-11-06 15:14:14.313880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:45.163 [2024-11-06 15:14:14.327176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f5378 00:15:45.163 [2024-11-06 15:14:14.328337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.163 [2024-11-06 15:14:14.328533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:45.163 [2024-11-06 15:14:14.342143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f4f40 00:15:45.163 [2024-11-06 15:14:14.343319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.163 [2024-11-06 15:14:14.343502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:45.163 [2024-11-06 15:14:14.357093] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f4b08 00:15:45.163 [2024-11-06 15:14:14.358064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.163 [2024-11-06 15:14:14.358101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:45.163 [2024-11-06 15:14:14.371724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f46d0 00:15:45.163 [2024-11-06 15:14:14.372843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.163 [2024-11-06 15:14:14.372872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:45.163 [2024-11-06 15:14:14.386206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f4298 00:15:45.163 [2024-11-06 15:14:14.387147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.163 [2024-11-06 15:14:14.387368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:45.163 [2024-11-06 15:14:14.400918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f3e60 00:15:45.163 [2024-11-06 15:14:14.402003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.163 [2024-11-06 15:14:14.402210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:45.163 [2024-11-06 15:14:14.416888] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f3a28 00:15:45.163 [2024-11-06 15:14:14.418061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.163 [2024-11-06 15:14:14.418271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:45.163 [2024-11-06 15:14:14.433701] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f35f0 00:15:45.163 [2024-11-06 15:14:14.434926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.163 [2024-11-06 15:14:14.435148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:45.422 [2024-11-06 15:14:14.450604] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f31b8 00:15:45.422 [2024-11-06 15:14:14.451785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.422 [2024-11-06 15:14:14.452009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:45.422 [2024-11-06 15:14:14.467187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f2d80 00:15:45.422 [2024-11-06 15:14:14.468308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.422 [2024-11-06 
15:14:14.468511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:45.422 [2024-11-06 15:14:14.483595] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f2948 00:15:45.422 [2024-11-06 15:14:14.484739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.422 [2024-11-06 15:14:14.484987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:45.422 [2024-11-06 15:14:14.499229] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f2510 00:15:45.422 [2024-11-06 15:14:14.500333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.422 [2024-11-06 15:14:14.500540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:45.422 [2024-11-06 15:14:14.513846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f20d8 00:15:45.422 [2024-11-06 15:14:14.514859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.422 [2024-11-06 15:14:14.514891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:45.422 [2024-11-06 15:14:14.528355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f1ca0 00:15:45.422 [2024-11-06 15:14:14.529156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.422 [2024-11-06 15:14:14.529193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:45.422 [2024-11-06 15:14:14.543784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f1868 00:15:45.422 [2024-11-06 15:14:14.544662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.422 [2024-11-06 15:14:14.544862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:45.422 [2024-11-06 15:14:14.558338] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f1430 00:15:45.422 [2024-11-06 15:14:14.559114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.422 [2024-11-06 15:14:14.559153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:45.422 [2024-11-06 15:14:14.572701] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f0ff8 00:15:45.422 [2024-11-06 15:14:14.573458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13283 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:45.422 [2024-11-06 15:14:14.573496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:45.422 [2024-11-06 15:14:14.586826] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f0bc0 00:15:45.422 [2024-11-06 15:14:14.587850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.422 [2024-11-06 15:14:14.587879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:45.422 [2024-11-06 15:14:14.601163] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f0788 00:15:45.422 [2024-11-06 15:14:14.601927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.422 [2024-11-06 15:14:14.601963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:45.422 [2024-11-06 15:14:14.615558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190f0350 00:15:45.422 [2024-11-06 15:14:14.616441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.422 [2024-11-06 15:14:14.616476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:45.422 [2024-11-06 15:14:14.630794] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190eff18 00:15:45.422 [2024-11-06 15:14:14.631827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.422 [2024-11-06 15:14:14.631873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:45.422 [2024-11-06 15:14:14.647574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190efae0 00:15:45.422 [2024-11-06 15:14:14.648418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.422 [2024-11-06 15:14:14.648456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:45.422 [2024-11-06 15:14:14.663160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190ef6a8 00:15:45.422 [2024-11-06 15:14:14.664005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.422 [2024-11-06 15:14:14.664042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:45.422 [2024-11-06 15:14:14.677597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190ef270 00:15:45.422 [2024-11-06 15:14:14.678360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:11062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.422 [2024-11-06 15:14:14.678410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:45.422 [2024-11-06 15:14:14.691524] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190eee38 00:15:45.422 [2024-11-06 15:14:14.692367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.422 [2024-11-06 15:14:14.692397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:15:45.681 [2024-11-06 15:14:14.706940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190eea00 00:15:45.681 [2024-11-06 15:14:14.707695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.681 [2024-11-06 15:14:14.707904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:45.681 [2024-11-06 15:14:14.721028] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190ee5c8 00:15:45.681 [2024-11-06 15:14:14.721691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.681 [2024-11-06 15:14:14.721756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:15:45.681 [2024-11-06 15:14:14.734890] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190ee190 00:15:45.681 [2024-11-06 15:14:14.735550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.681 [2024-11-06 15:14:14.735572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:15:45.681 [2024-11-06 15:14:14.749343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190edd58 00:15:45.681 [2024-11-06 15:14:14.750041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.681 [2024-11-06 15:14:14.750078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:15:45.681 [2024-11-06 15:14:14.765108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190ed920 00:15:45.681 [2024-11-06 15:14:14.766145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.681 [2024-11-06 15:14:14.766191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:15:45.681 [2024-11-06 15:14:14.780634] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190ed4e8 00:15:45.682 [2024-11-06 15:14:14.781422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:23 nsid:1 lba:2079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.682 [2024-11-06 15:14:14.781458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:15:45.682 [2024-11-06 15:14:14.795518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190ed0b0 00:15:45.682 [2024-11-06 15:14:14.796505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.682 [2024-11-06 15:14:14.796533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:15:45.682 [2024-11-06 15:14:14.810373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190ecc78 00:15:45.682 [2024-11-06 15:14:14.811038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.682 [2024-11-06 15:14:14.811119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:15:45.682 [2024-11-06 15:14:14.826463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190ec840 00:15:45.682 [2024-11-06 15:14:14.827330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.682 [2024-11-06 15:14:14.827363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:15:45.682 [2024-11-06 15:14:14.842373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190ec408 00:15:45.682 [2024-11-06 15:14:14.843082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.682 [2024-11-06 15:14:14.843120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:15:45.682 [2024-11-06 15:14:14.857673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190ebfd0 00:15:45.682 [2024-11-06 15:14:14.858285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.682 [2024-11-06 15:14:14.858322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:15:45.682 [2024-11-06 15:14:14.872901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190ebb98 00:15:45.682 [2024-11-06 15:14:14.873544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.682 [2024-11-06 15:14:14.873582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:15:45.682 [2024-11-06 15:14:14.888026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190eb760 00:15:45.682 [2024-11-06 15:14:14.888600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.682 [2024-11-06 15:14:14.888637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:15:45.682 [2024-11-06 15:14:14.903457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190eb328 00:15:45.682 [2024-11-06 15:14:14.904261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.682 [2024-11-06 15:14:14.904290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:15:45.682 [2024-11-06 15:14:14.918593] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190eaef0 00:15:45.682 [2024-11-06 15:14:14.919442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.682 [2024-11-06 15:14:14.919474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:15:45.682 [2024-11-06 15:14:14.934046] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190eaab8 00:15:45.682 [2024-11-06 15:14:14.934904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.682 [2024-11-06 15:14:14.934933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:15:45.682 [2024-11-06 15:14:14.948623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190ea680 00:15:45.682 [2024-11-06 15:14:14.949281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.682 [2024-11-06 15:14:14.949325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:45.941 [2024-11-06 15:14:14.964636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190ea248 00:15:45.941 [2024-11-06 15:14:14.965205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.941 [2024-11-06 15:14:14.965242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:45.941 [2024-11-06 15:14:14.981536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e9e10 00:15:45.941 [2024-11-06 15:14:14.982138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.941 [2024-11-06 15:14:14.982192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:45.941 [2024-11-06 15:14:14.996835] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e99d8 00:15:45.941 [2024-11-06 
15:14:14.997313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.941 [2024-11-06 15:14:14.997369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:45.941 [2024-11-06 15:14:15.011903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e95a0 00:15:45.941 [2024-11-06 15:14:15.012435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.941 [2024-11-06 15:14:15.012477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:45.941 [2024-11-06 15:14:15.029347] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e9168 00:15:45.941 [2024-11-06 15:14:15.029968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.941 [2024-11-06 15:14:15.030006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:45.941 [2024-11-06 15:14:15.045264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e8d30 00:15:45.941 [2024-11-06 15:14:15.045715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.941 [2024-11-06 15:14:15.045788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:45.941 [2024-11-06 15:14:15.061139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e88f8 00:15:45.941 [2024-11-06 15:14:15.061584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.941 [2024-11-06 15:14:15.061625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:15:45.941 [2024-11-06 15:14:15.076232] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e84c0 00:15:45.941 [2024-11-06 15:14:15.076739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.941 [2024-11-06 15:14:15.076767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:45.941 [2024-11-06 15:14:15.091143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e8088 00:15:45.941 [2024-11-06 15:14:15.091812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.941 [2024-11-06 15:14:15.091845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:45.941 [2024-11-06 15:14:15.106080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e7c50 
00:15:45.941 [2024-11-06 15:14:15.106663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.941 [2024-11-06 15:14:15.106702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:45.941 [2024-11-06 15:14:15.121100] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e7818 00:15:45.941 [2024-11-06 15:14:15.121495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.941 [2024-11-06 15:14:15.121520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:45.941 [2024-11-06 15:14:15.135880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e73e0 00:15:45.941 [2024-11-06 15:14:15.136282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.941 [2024-11-06 15:14:15.136306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:45.941 [2024-11-06 15:14:15.150578] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e6fa8 00:15:45.941 [2024-11-06 15:14:15.151013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.941 [2024-11-06 15:14:15.151076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:45.941 [2024-11-06 15:14:15.165521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e6b70 00:15:45.941 [2024-11-06 15:14:15.165992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.941 [2024-11-06 15:14:15.166024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:45.941 [2024-11-06 15:14:15.180513] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e6738 00:15:45.941 [2024-11-06 15:14:15.180957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.941 [2024-11-06 15:14:15.181004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:45.941 [2024-11-06 15:14:15.195166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e6300 00:15:45.941 [2024-11-06 15:14:15.195751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.941 [2024-11-06 15:14:15.195782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:45.941 [2024-11-06 15:14:15.209782] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2477dc0) with pdu=0x2000190e5ec8 00:15:45.941 [2024-11-06 15:14:15.210315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.941 [2024-11-06 15:14:15.210345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:46.200 [2024-11-06 15:14:15.224874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e5a90 00:15:46.200 [2024-11-06 15:14:15.225415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.200 [2024-11-06 15:14:15.225445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:46.200 [2024-11-06 15:14:15.239189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e5658 00:15:46.200 [2024-11-06 15:14:15.239814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.200 [2024-11-06 15:14:15.239846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:46.200 [2024-11-06 15:14:15.253680] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e5220 00:15:46.200 [2024-11-06 15:14:15.254011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.200 [2024-11-06 15:14:15.254035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:15:46.200 [2024-11-06 15:14:15.268642] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e4de8 00:15:46.200 [2024-11-06 15:14:15.269260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.200 [2024-11-06 15:14:15.269297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:46.200 [2024-11-06 15:14:15.285276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e49b0 00:15:46.200 [2024-11-06 15:14:15.285625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.200 [2024-11-06 15:14:15.285666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:46.200 [2024-11-06 15:14:15.301446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e4578 00:15:46.200 [2024-11-06 15:14:15.301816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.200 [2024-11-06 15:14:15.301841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:46.200 [2024-11-06 15:14:15.318146] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e4140 00:15:46.200 [2024-11-06 15:14:15.318480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.200 [2024-11-06 15:14:15.318509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:46.200 [2024-11-06 15:14:15.334557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e3d08 00:15:46.200 [2024-11-06 15:14:15.334901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.200 [2024-11-06 15:14:15.334927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:46.200 [2024-11-06 15:14:15.349503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e38d0 00:15:46.200 [2024-11-06 15:14:15.349806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.200 [2024-11-06 15:14:15.349836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:46.200 [2024-11-06 15:14:15.363852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e3498 00:15:46.200 [2024-11-06 15:14:15.364095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.200 [2024-11-06 15:14:15.364120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:46.200 [2024-11-06 15:14:15.378163] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e3060 00:15:46.200 [2024-11-06 15:14:15.378397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.200 [2024-11-06 15:14:15.378417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:15:46.200 [2024-11-06 15:14:15.392569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e2c28 00:15:46.200 [2024-11-06 15:14:15.392893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.201 [2024-11-06 15:14:15.392926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:46.201 [2024-11-06 15:14:15.406871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e27f0 00:15:46.201 [2024-11-06 15:14:15.407306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.201 [2024-11-06 15:14:15.407331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:46.201 [2024-11-06 15:14:15.421407] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e23b8 00:15:46.201 [2024-11-06 15:14:15.421621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.201 [2024-11-06 15:14:15.421641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:46.201 [2024-11-06 15:14:15.435789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e1f80 00:15:46.201 [2024-11-06 15:14:15.435994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.201 [2024-11-06 15:14:15.436014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:46.201 [2024-11-06 15:14:15.450068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e1b48 00:15:46.201 [2024-11-06 15:14:15.450280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.201 [2024-11-06 15:14:15.450299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:46.201 [2024-11-06 15:14:15.464472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e1710 00:15:46.201 [2024-11-06 15:14:15.464681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.201 [2024-11-06 15:14:15.464728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:46.459 [2024-11-06 15:14:15.479804] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e12d8 00:15:46.459 [2024-11-06 15:14:15.479982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.459 [2024-11-06 15:14:15.480001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:46.460 [2024-11-06 15:14:15.494192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e0ea0 00:15:46.460 [2024-11-06 15:14:15.494362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.460 [2024-11-06 15:14:15.494384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:46.460 [2024-11-06 15:14:15.508500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e0a68 00:15:46.460 [2024-11-06 15:14:15.508660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.460 [2024-11-06 15:14:15.508720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:46.460 [2024-11-06 
15:14:15.522961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e0630 00:15:46.460 [2024-11-06 15:14:15.523310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.460 [2024-11-06 15:14:15.523332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:46.460 [2024-11-06 15:14:15.537592] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190e01f8 00:15:46.460 [2024-11-06 15:14:15.537790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.460 [2024-11-06 15:14:15.537826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:46.460 [2024-11-06 15:14:15.551877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190dfdc0 00:15:46.460 [2024-11-06 15:14:15.552014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.460 [2024-11-06 15:14:15.552034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:46.460 [2024-11-06 15:14:15.565960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190df988 00:15:46.460 [2024-11-06 15:14:15.566092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.460 [2024-11-06 15:14:15.566112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:46.460 [2024-11-06 15:14:15.580802] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190df550 00:15:46.460 [2024-11-06 15:14:15.580925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.460 [2024-11-06 15:14:15.580947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:46.460 [2024-11-06 15:14:15.596574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190df118 00:15:46.460 [2024-11-06 15:14:15.596778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.460 [2024-11-06 15:14:15.596799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:46.460 [2024-11-06 15:14:15.611299] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190dece0 00:15:46.460 [2024-11-06 15:14:15.611568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.460 [2024-11-06 15:14:15.611590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:15:46.460 [2024-11-06 15:14:15.626707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190de8a8 00:15:46.460 [2024-11-06 15:14:15.626814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.460 [2024-11-06 15:14:15.626834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:46.460 [2024-11-06 15:14:15.641151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190de038 00:15:46.460 [2024-11-06 15:14:15.641391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.460 [2024-11-06 15:14:15.641412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:46.460 [2024-11-06 15:14:15.663748] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190de038 00:15:46.460 [2024-11-06 15:14:15.665262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.460 [2024-11-06 15:14:15.665474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:46.460 [2024-11-06 15:14:15.679921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190de470 00:15:46.460 [2024-11-06 15:14:15.681523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.460 [2024-11-06 15:14:15.681779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.460 [2024-11-06 15:14:15.695504] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477dc0) with pdu=0x2000190de8a8 00:15:46.460 [2024-11-06 15:14:15.697025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.460 [2024-11-06 15:14:15.697248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:46.460 00:15:46.460 Latency(us) 00:15:46.460 [2024-11-06T15:14:15.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.460 [2024-11-06T15:14:15.735Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:46.460 nvme0n1 : 2.00 16660.54 65.08 0.00 0.00 7676.59 6464.23 22639.71 00:15:46.460 [2024-11-06T15:14:15.735Z] =================================================================================================================== 00:15:46.460 [2024-11-06T15:14:15.735Z] Total : 16660.54 65.08 0.00 0.00 7676.59 6464.23 22639.71 00:15:46.460 0 00:15:46.460 15:14:15 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:15:46.460 15:14:15 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:15:46.460 15:14:15 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:15:46.460 | .driver_specific 00:15:46.460 | .nvme_error 00:15:46.460 | .status_code 00:15:46.460 | .command_transient_transport_error' 00:15:46.460 15:14:15 
-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:15:47.027 15:14:16 -- host/digest.sh@71 -- # (( 130 > 0 )) 00:15:47.027 15:14:16 -- host/digest.sh@73 -- # killprocess 72086 00:15:47.027 15:14:16 -- common/autotest_common.sh@936 -- # '[' -z 72086 ']' 00:15:47.027 15:14:16 -- common/autotest_common.sh@940 -- # kill -0 72086 00:15:47.027 15:14:16 -- common/autotest_common.sh@941 -- # uname 00:15:47.027 15:14:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:47.027 15:14:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72086 00:15:47.027 killing process with pid 72086 00:15:47.027 Received shutdown signal, test time was about 2.000000 seconds 00:15:47.027 00:15:47.027 Latency(us) 00:15:47.027 [2024-11-06T15:14:16.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.027 [2024-11-06T15:14:16.302Z] =================================================================================================================== 00:15:47.027 [2024-11-06T15:14:16.302Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:47.027 15:14:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:47.027 15:14:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:47.027 15:14:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72086' 00:15:47.027 15:14:16 -- common/autotest_common.sh@955 -- # kill 72086 00:15:47.027 15:14:16 -- common/autotest_common.sh@960 -- # wait 72086 00:15:47.027 15:14:16 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:15:47.027 15:14:16 -- host/digest.sh@54 -- # local rw bs qd 00:15:47.027 15:14:16 -- host/digest.sh@56 -- # rw=randwrite 00:15:47.027 15:14:16 -- host/digest.sh@56 -- # bs=131072 00:15:47.027 15:14:16 -- host/digest.sh@56 -- # qd=16 00:15:47.027 15:14:16 -- host/digest.sh@58 -- # bperfpid=72145 00:15:47.027 15:14:16 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:15:47.027 15:14:16 -- host/digest.sh@60 -- # waitforlisten 72145 /var/tmp/bperf.sock 00:15:47.027 15:14:16 -- common/autotest_common.sh@829 -- # '[' -z 72145 ']' 00:15:47.027 15:14:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:47.027 15:14:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.027 15:14:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:47.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:47.027 15:14:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.027 15:14:16 -- common/autotest_common.sh@10 -- # set +x 00:15:47.027 [2024-11-06 15:14:16.278620] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:47.027 [2024-11-06 15:14:16.278973] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72145 ] 00:15:47.027 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:47.027 Zero copy mechanism will not be used. 
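For reference, the pass/fail check traced just before bperf pid 72086 was killed reduces to the lines below. This is a sketch assembled from the xtrace output above (the rpc.py path, socket, bdev name, and jq filter are copied from the trace; the errcount variable and the standalone framing are illustrative):

    # Read the per-bdev NVMe error counters over bdevperf's RPC socket and extract the
    # COMMAND TRANSIENT TRANSPORT ERROR count produced by the injected data-digest errors.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The step passes only if at least one such error was counted; this run recorded 130.
    (( errcount > 0 ))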
00:15:47.286 [2024-11-06 15:14:16.406456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.286 [2024-11-06 15:14:16.458187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.222 15:14:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:48.222 15:14:17 -- common/autotest_common.sh@862 -- # return 0 00:15:48.222 15:14:17 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:48.222 15:14:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:48.481 15:14:17 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:15:48.481 15:14:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.481 15:14:17 -- common/autotest_common.sh@10 -- # set +x 00:15:48.481 15:14:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.481 15:14:17 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:48.481 15:14:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:48.741 nvme0n1 00:15:48.741 15:14:17 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:15:48.741 15:14:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.741 15:14:17 -- common/autotest_common.sh@10 -- # set +x 00:15:48.741 15:14:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.741 15:14:17 -- host/digest.sh@69 -- # bperf_py perform_tests 00:15:48.741 15:14:17 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:48.741 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:48.741 Zero copy mechanism will not be used. 00:15:48.741 Running I/O for 2 seconds... 
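Before the run output that follows, the xtrace lines show how this second pass (randwrite, 128 KiB I/O, queue depth 16) is wired up. Condensed into one listing, the sequence is roughly the following; each RPC appears verbatim in the trace, while the comments and the grouping into a single sketch are added here (the accel_error_inject_error calls go through rpc_cmd, i.e. against the default RPC socket rather than bperf.sock):

    # bdevperf runs on core mask 0x2 with 128 KiB random writes, QD 16, for 2 seconds;
    # the harness backgrounds it and waits via 'waitforlisten 72145 /var/tmp/bperf.sock'.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z

    # Keep NVMe error statistics and retry indefinitely, so every injected error stays visible.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # crc32c corruption is disabled while the controller attaches ...
    rpc_cmd accel_error_inject_error -o crc32c -t disable

    # ... the controller is attached with data digest (--ddgst) enabled over TCP ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # ... and corruption is then injected into 32 crc32c operations so the data-digest check
    # fails and WRITEs complete as COMMAND TRANSIENT TRANSPORT ERROR (00/22), as seen below.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

    # Drive the I/O for the 2-second run whose output follows.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests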
00:15:48.741 [2024-11-06 15:14:17.961683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:48.741 [2024-11-06 15:14:17.961978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:48.741 [2024-11-06 15:14:17.962008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:48.741 [2024-11-06 15:14:17.966540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:48.741 [2024-11-06 15:14:17.966867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:48.741 [2024-11-06 15:14:17.966901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:48.741 [2024-11-06 15:14:17.971710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:48.741 [2024-11-06 15:14:17.972176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:48.741 [2024-11-06 15:14:17.972225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:48.741 [2024-11-06 15:14:17.976792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:48.741 [2024-11-06 15:14:17.977092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:48.741 [2024-11-06 15:14:17.977120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:48.741 [2024-11-06 15:14:17.981616] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:48.741 [2024-11-06 15:14:17.981958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:48.741 [2024-11-06 15:14:17.982022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:48.741 [2024-11-06 15:14:17.986477] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:48.741 [2024-11-06 15:14:17.986788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:48.741 [2024-11-06 15:14:17.986817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:48.741 [2024-11-06 15:14:17.991385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:48.741 [2024-11-06 15:14:17.991904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:48.741 [2024-11-06 15:14:17.991937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:48.741 [2024-11-06 15:14:17.996434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:48.741 [2024-11-06 15:14:17.996784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:48.741 [2024-11-06 15:14:17.996813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:48.741 [2024-11-06 15:14:18.001395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:48.741 [2024-11-06 15:14:18.001684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:48.741 [2024-11-06 15:14:18.001740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:48.741 [2024-11-06 15:14:18.006508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:48.741 [2024-11-06 15:14:18.006863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:48.741 [2024-11-06 15:14:18.006896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:48.741 [2024-11-06 15:14:18.011555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:48.741 [2024-11-06 15:14:18.012093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:48.741 [2024-11-06 15:14:18.012144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.001 [2024-11-06 15:14:18.017440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.001 [2024-11-06 15:14:18.017819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.001 [2024-11-06 15:14:18.017854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.001 [2024-11-06 15:14:18.022968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.001 [2024-11-06 15:14:18.023327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.023358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.028519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.028886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.028921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.033933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.034308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.034341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.039808] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.040169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.040198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.045287] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.045801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.045835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.050988] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.051349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.051380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.056355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.056638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.056690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.061467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.061994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.062030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.066637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.066941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.066968] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.071635] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.072006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.072070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.076666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.076947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.076974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.081501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.082011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.082061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.086745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.087038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.087066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.091780] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.092066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.092094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.096682] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.096965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.096993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.101512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.102027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 
[2024-11-06 15:14:18.102078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.107156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.107543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.107588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.112704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.112987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.113014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.117597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.118139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.118174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.122799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.123106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.123134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.127926] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.128233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.128262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.133045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.133345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.133390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.138173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.138457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.138484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.143130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.143476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.143506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.148104] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.148387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.148415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.153032] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.153314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.153357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.157969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.002 [2024-11-06 15:14:18.158267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.002 [2024-11-06 15:14:18.158310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.002 [2024-11-06 15:14:18.162834] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.163116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.163143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.167739] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.168021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.168048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.172612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.172952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.172985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.177505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.178018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.178066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.182618] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.182923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.182950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.187712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.188008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.188036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.192538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.192863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.192895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.197561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.198046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.198110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.202653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.203011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.203043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.207697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.207990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.208017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.212683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.213019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.213059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.218006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.218320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.218347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.223236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.223620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.223680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.228643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.228978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.229006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.234200] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.234540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.234570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.239601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.239995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.240061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.244999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 
[2024-11-06 15:14:18.245289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.245317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.250195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.250509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.250538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.255305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.255636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.255689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.260536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.260897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.260931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.265525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.266032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.266082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.003 [2024-11-06 15:14:18.270954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.003 [2024-11-06 15:14:18.271339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.003 [2024-11-06 15:14:18.271369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.264 [2024-11-06 15:14:18.276360] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.264 [2024-11-06 15:14:18.276698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.264 [2024-11-06 15:14:18.276736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.264 [2024-11-06 15:14:18.281791] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.264 [2024-11-06 15:14:18.282144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.264 [2024-11-06 15:14:18.282173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.264 [2024-11-06 15:14:18.286872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.264 [2024-11-06 15:14:18.287186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.264 [2024-11-06 15:14:18.287213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.264 [2024-11-06 15:14:18.291945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.264 [2024-11-06 15:14:18.292238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.264 [2024-11-06 15:14:18.292265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.264 [2024-11-06 15:14:18.296938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.264 [2024-11-06 15:14:18.297228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.264 [2024-11-06 15:14:18.297255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.264 [2024-11-06 15:14:18.302169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.264 [2024-11-06 15:14:18.302477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.264 [2024-11-06 15:14:18.302506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.264 [2024-11-06 15:14:18.307273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.264 [2024-11-06 15:14:18.307641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.264 [2024-11-06 15:14:18.307694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.264 [2024-11-06 15:14:18.312335] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.264 [2024-11-06 15:14:18.312822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.264 [2024-11-06 15:14:18.312872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.264 [2024-11-06 15:14:18.317697] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.264 [2024-11-06 15:14:18.318018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.264 [2024-11-06 15:14:18.318046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.264 [2024-11-06 15:14:18.322605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.264 [2024-11-06 15:14:18.322971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.264 [2024-11-06 15:14:18.323036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.264 [2024-11-06 15:14:18.327882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.264 [2024-11-06 15:14:18.328173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.264 [2024-11-06 15:14:18.328200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.264 [2024-11-06 15:14:18.332822] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.264 [2024-11-06 15:14:18.333116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.264 [2024-11-06 15:14:18.333144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.264 [2024-11-06 15:14:18.337998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.264 [2024-11-06 15:14:18.338288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.264 [2024-11-06 15:14:18.338316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.342930] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.343237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.343306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.348598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.349014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.349038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:15:49.265 [2024-11-06 15:14:18.354315] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.354821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.354862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.360196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.360496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.360535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.365118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.365408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.365447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.370026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.370312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.370370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.374811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.375115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.375143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.379719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.380052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.380080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.384588] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.384941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.384974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.389634] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.390065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.390099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.394561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.394882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.394913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.399441] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.399813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.399841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.404359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.404642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.404678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.409110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.409395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.409423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.414123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.414409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.414438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.419490] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.419846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.419890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.424475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.424771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.424799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.429290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.429739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.429762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.434223] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.434526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.434554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.439018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.439377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.439408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.443982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.444268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.444296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.448697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.448974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.449001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.453389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.453837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.453860] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.458322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.458599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.458625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.265 [2024-11-06 15:14:18.463098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.265 [2024-11-06 15:14:18.463406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.265 [2024-11-06 15:14:18.463428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.266 [2024-11-06 15:14:18.467863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.266 [2024-11-06 15:14:18.468140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.266 [2024-11-06 15:14:18.468166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.266 [2024-11-06 15:14:18.472825] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.266 [2024-11-06 15:14:18.473110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.266 [2024-11-06 15:14:18.473137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.266 [2024-11-06 15:14:18.477603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.266 [2024-11-06 15:14:18.477901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.266 [2024-11-06 15:14:18.477929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.266 [2024-11-06 15:14:18.482437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.266 [2024-11-06 15:14:18.482762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.266 [2024-11-06 15:14:18.482810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.266 [2024-11-06 15:14:18.487500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.266 [2024-11-06 15:14:18.487879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.266 
[2024-11-06 15:14:18.487919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.266 [2024-11-06 15:14:18.492779] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.266 [2024-11-06 15:14:18.493063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.266 [2024-11-06 15:14:18.493091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.266 [2024-11-06 15:14:18.498056] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.266 [2024-11-06 15:14:18.498330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.266 [2024-11-06 15:14:18.498358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.266 [2024-11-06 15:14:18.503452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.266 [2024-11-06 15:14:18.503782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.266 [2024-11-06 15:14:18.503816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.266 [2024-11-06 15:14:18.508909] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.266 [2024-11-06 15:14:18.509301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.266 [2024-11-06 15:14:18.509338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.266 [2024-11-06 15:14:18.514464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.266 [2024-11-06 15:14:18.514815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.266 [2024-11-06 15:14:18.514845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.266 [2024-11-06 15:14:18.519947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.266 [2024-11-06 15:14:18.520255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.266 [2024-11-06 15:14:18.520283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.266 [2024-11-06 15:14:18.525428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.266 [2024-11-06 15:14:18.525727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.266 [2024-11-06 15:14:18.525764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.266 [2024-11-06 15:14:18.530480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.266 [2024-11-06 15:14:18.530800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.266 [2024-11-06 15:14:18.530845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.266 [2024-11-06 15:14:18.536026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.266 [2024-11-06 15:14:18.536347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.266 [2024-11-06 15:14:18.536405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.526 [2024-11-06 15:14:18.541416] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.526 [2024-11-06 15:14:18.541704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.526 [2024-11-06 15:14:18.541730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.526 [2024-11-06 15:14:18.546612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.526 [2024-11-06 15:14:18.546965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.526 [2024-11-06 15:14:18.547004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.526 [2024-11-06 15:14:18.551652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.526 [2024-11-06 15:14:18.552068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.526 [2024-11-06 15:14:18.552108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.526 [2024-11-06 15:14:18.556744] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.526 [2024-11-06 15:14:18.557015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.557042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.561585] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.561935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.561969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.566598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.566931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.566958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.571764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.572056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.572084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.576477] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.576960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.577008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.581887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.582194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.582254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.587055] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.587377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.587408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.592109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.592402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.592430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.597119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.597397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.597425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.602188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.602461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.602488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.607180] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.607504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.607550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.612290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.612857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.612891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.617521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.617880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.617913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.622671] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.622964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.623023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.627690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.628066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.628107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.632839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 
[2024-11-06 15:14:18.633107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.633134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.637742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.638053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.638235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.642965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.643321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.643351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.648048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.648317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.648359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.652928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.653195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.653222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.657899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.658272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.658307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.663030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.663345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.663391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.668058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.668324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.668351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.673279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.673671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.527 [2024-11-06 15:14:18.673769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.527 [2024-11-06 15:14:18.679224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.527 [2024-11-06 15:14:18.679798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.679830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.684970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.685376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.685460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.690509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.691100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.691339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.696408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.696908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.697129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.702161] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.702631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.702915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.708144] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.708638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.708879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.714328] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.714840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.715017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.720441] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.720925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.721098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.726235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.726770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.726810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.731807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.732109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.732136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.736905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.737188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.737215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.742048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.742349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.742407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
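Each tcp.c:2036:data_crc32_calc_done entry above reports that the CRC32C data digest computed over a received data PDU did not match the DDGST value carried with it, so the WRITE is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), dnr:0, i.e. a retryable transport-level failure rather than a media error. A minimal sketch of such a check follows; this is not SPDK's code, and the helper names and buffer handling are assumptions made for illustration:

/* Illustrative sketch only (not SPDK source): CRC32C as used for the
 * NVMe/TCP data digest, checked against the DDGST value of a received
 * data PDU.  crc32c() and data_digest_ok() are hypothetical helpers. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;                      /* CRC32C initial value */
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            /* 0x82F63B78 = reflected Castagnoli polynomial */
            crc = (crc >> 1) ^ ((crc & 1u) ? 0x82F63B78u : 0u);
    }
    return crc ^ 0xFFFFFFFFu;                        /* final XOR */
}

/* True when the digest matches the payload; a mismatch is what the log
 * reports as "Data digest error", after which the command completes with
 * TRANSIENT TRANSPORT ERROR (00/22), dnr:0 (retryable). */
static bool data_digest_ok(const uint8_t *data, size_t len, uint32_t recv_ddgst)
{
    return crc32c(data, len) == recv_ddgst;
}

int main(void)
{
    const uint8_t payload[] = "example data pdu payload";
    uint32_t ddgst = crc32c(payload, sizeof(payload) - 1);

    printf("intact digest:    %s\n",
           data_digest_ok(payload, sizeof(payload) - 1, ddgst) ? "ok" : "digest error");
    printf("corrupted digest: %s\n",
           data_digest_ok(payload, sizeof(payload) - 1, ddgst ^ 1u) ? "ok" : "digest error");
    return 0;
}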
00:15:49.528 [2024-11-06 15:14:18.747121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.747442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.747471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.752044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.752326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.752354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.757031] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.757315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.757343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.761912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.762200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.762227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.766766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.767050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.767077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.771659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.772152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.772180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.776732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.777034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.777060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.781588] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.781930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.781962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.786528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.786845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.786873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.791425] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.791956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.791990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.528 [2024-11-06 15:14:18.796701] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.528 [2024-11-06 15:14:18.797099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.528 [2024-11-06 15:14:18.797128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.788 [2024-11-06 15:14:18.802229] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.788 [2024-11-06 15:14:18.802513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.788 [2024-11-06 15:14:18.802542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.788 [2024-11-06 15:14:18.807527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.788 [2024-11-06 15:14:18.808079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.788 [2024-11-06 15:14:18.808103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.788 [2024-11-06 15:14:18.812703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.788 [2024-11-06 15:14:18.813003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.788 [2024-11-06 15:14:18.813030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.788 [2024-11-06 15:14:18.817491] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.788 [2024-11-06 15:14:18.817825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.788 [2024-11-06 15:14:18.817858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.788 [2024-11-06 15:14:18.822499] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.822816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.822848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.827559] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.828071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.828094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.832716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.832993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.833021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.837520] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.837852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.837887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.842515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.842848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.842881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.847460] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.847972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.848019] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.852579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.852932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.852964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.857467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.857852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.857886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.862539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.862878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.862910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.867543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.868070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.868099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.872818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.873105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.873133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.877581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.877920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.877962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.882642] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.882985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 
[2024-11-06 15:14:18.883019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.887643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.888136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.888165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.892794] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.893078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.893120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.897702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.898024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.898063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.902578] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.902895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.902919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.907534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.908053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.908082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.912662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.912973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.913000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.917509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.917842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.917871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.922509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.922839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.922873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.927418] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.927947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.927995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.933064] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.933337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.933380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.938405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.938688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.938728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.789 [2024-11-06 15:14:18.943177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.789 [2024-11-06 15:14:18.943521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.789 [2024-11-06 15:14:18.943551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:18.948026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:18.948300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:18.948326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:18.952713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:18.952990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:18.953017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:18.957335] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:18.957612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:18.957640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:18.962111] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:18.962405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:18.962427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:18.967168] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:18.967656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:18.967707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:18.972514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:18.972851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:18.972879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:18.977354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:18.977637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:18.977689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:18.982499] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:18.982848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:18.982875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:18.987684] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:18.988155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:18.988191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:18.992981] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:18.993265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:18.993292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:18.998012] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:18.998309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:18.998337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:19.002960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:19.003243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:19.003311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:19.008054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:19.008337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:19.008380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:19.012887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:19.013172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:19.013199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:19.017611] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:19.017910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:19.017931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:19.022481] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 
[2024-11-06 15:14:19.022797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:19.022825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:19.027324] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:19.027831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:19.027881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:19.032388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:19.032696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:19.032722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:19.037298] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:19.037579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:19.037606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:19.042439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:19.042759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:19.042788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:19.047530] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:19.048045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:19.048108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:19.053171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:19.053442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:19.053464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:49.790 [2024-11-06 15:14:19.058805] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:49.790 [2024-11-06 15:14:19.059204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:49.790 [2024-11-06 15:14:19.059254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.050 [2024-11-06 15:14:19.064626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.065045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.065113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.070279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.070563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.070590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.075675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.076216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.076263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.081127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.081413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.081440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.086229] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.086528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.086555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.091042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.091360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.091389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.096061] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.096327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.096355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.100928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.101232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.101259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.105812] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.106107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.106134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.110654] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.110959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.111013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.115575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.116075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.116098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.120690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.121013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.121052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.125609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.125958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.125997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:15:50.051 [2024-11-06 15:14:19.130539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.130860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.130893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.135390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.135882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.135918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.140484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.140797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.140821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.145397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.145717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.145746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.150266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.150550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.150577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.155073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.155410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.155439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.159944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.160229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.160256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.164724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.165006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.165032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.169527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.169866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.169897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.174460] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.174774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.174806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.179417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.179933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.179966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.184813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.185098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.185124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.190179] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.190522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.190551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.195696] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.196202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.196249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.201231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.201561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.201591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.206470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.206825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.206858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.211542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.212107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.212147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.216712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.217000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.217026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.221605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.221989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.222052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.226790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.227075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.227102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.231711] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.232016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.232043] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.236706] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.051 [2024-11-06 15:14:19.236993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.051 [2024-11-06 15:14:19.237020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.051 [2024-11-06 15:14:19.241524] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.052 [2024-11-06 15:14:19.241884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.052 [2024-11-06 15:14:19.241909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.052 [2024-11-06 15:14:19.246592] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.052 [2024-11-06 15:14:19.246928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.052 [2024-11-06 15:14:19.246956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.052 [2024-11-06 15:14:19.251388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.052 [2024-11-06 15:14:19.251892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.052 [2024-11-06 15:14:19.251926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.052 [2024-11-06 15:14:19.256452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.052 [2024-11-06 15:14:19.256768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.052 [2024-11-06 15:14:19.256796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.052 [2024-11-06 15:14:19.261344] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.052 [2024-11-06 15:14:19.261648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.052 [2024-11-06 15:14:19.261701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.052 [2024-11-06 15:14:19.266218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.052 [2024-11-06 15:14:19.266498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.052 
[2024-11-06 15:14:19.266526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.052 [2024-11-06 15:14:19.271101] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.052 [2024-11-06 15:14:19.271436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.052 [2024-11-06 15:14:19.271465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.052 [2024-11-06 15:14:19.276050] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.052 [2024-11-06 15:14:19.276332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.052 [2024-11-06 15:14:19.276359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.052 [2024-11-06 15:14:19.281094] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.052 [2024-11-06 15:14:19.281410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.052 [2024-11-06 15:14:19.281438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.052 [2024-11-06 15:14:19.286045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.052 [2024-11-06 15:14:19.286342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.052 [2024-11-06 15:14:19.286369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.052 [2024-11-06 15:14:19.290919] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.052 [2024-11-06 15:14:19.291202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.052 [2024-11-06 15:14:19.291230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.052 [2024-11-06 15:14:19.295820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.052 [2024-11-06 15:14:19.296104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.052 [2024-11-06 15:14:19.296130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.052 [2024-11-06 15:14:19.300564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.052 [2024-11-06 15:14:19.300884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.052 [2024-11-06 15:14:19.300916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.052 [2024-11-06 15:14:19.305545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.052 [2024-11-06 15:14:19.305949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.052 [2024-11-06 15:14:19.305983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.052 [2024-11-06 15:14:19.310537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.052 [2024-11-06 15:14:19.310855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.052 [2024-11-06 15:14:19.310877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.052 [2024-11-06 15:14:19.315720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.052 [2024-11-06 15:14:19.316211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.052 [2024-11-06 15:14:19.316240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.052 [2024-11-06 15:14:19.320877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.052 [2024-11-06 15:14:19.321223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.052 [2024-11-06 15:14:19.321252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.312 [2024-11-06 15:14:19.326397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.312 [2024-11-06 15:14:19.326681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.312 [2024-11-06 15:14:19.326736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.312 [2024-11-06 15:14:19.331715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.312 [2024-11-06 15:14:19.332206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.312 [2024-11-06 15:14:19.332238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.312 [2024-11-06 15:14:19.336817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.312 [2024-11-06 15:14:19.337118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.312 [2024-11-06 15:14:19.337145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.312 [2024-11-06 15:14:19.341812] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.312 [2024-11-06 15:14:19.342118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.342145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.346644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.346967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.347030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.351963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.352256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.352284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.357410] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.357789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.357818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.362795] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.363103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.363130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.368096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.368407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.368436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.373491] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.373866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.373900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.378738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.379044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.379072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.383974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.384321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.384364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.389124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.389416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.389443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.394224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.394520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.394548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.399380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.399908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.399938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.404627] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.404957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.404979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.409827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 
[2024-11-06 15:14:19.410138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.410165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.414833] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.415122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.415150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.419988] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.420345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.420373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.425174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.425508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.425537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.430349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.430850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.430894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.435704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.436036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.436066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.440850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.441194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.441222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.446501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.446974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.447010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.452337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.452687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.452758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.457943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.458253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.458280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.463199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.463558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.463588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.468287] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.468652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.468693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.473364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.473654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.313 [2024-11-06 15:14:19.473708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.313 [2024-11-06 15:14:19.478303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.313 [2024-11-06 15:14:19.478783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.478846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.314 [2024-11-06 15:14:19.483774] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.484055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.484082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.314 [2024-11-06 15:14:19.488740] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.489016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.489043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.314 [2024-11-06 15:14:19.494026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.494336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.494365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.314 [2024-11-06 15:14:19.499133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.499484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.499513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.314 [2024-11-06 15:14:19.504284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.504561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.504588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.314 [2024-11-06 15:14:19.509081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.509360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.509387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.314 [2024-11-06 15:14:19.514120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.514397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.514423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:15:50.314 [2024-11-06 15:14:19.519135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.519472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.519501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.314 [2024-11-06 15:14:19.524004] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.524281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.524308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.314 [2024-11-06 15:14:19.528822] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.529101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.529128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.314 [2024-11-06 15:14:19.533621] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.533961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.533987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.314 [2024-11-06 15:14:19.538421] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.538743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.538771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.314 [2024-11-06 15:14:19.543280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.543580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.543638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.314 [2024-11-06 15:14:19.548213] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.548490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.548517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.314 [2024-11-06 15:14:19.553128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.553413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.553441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.314 [2024-11-06 15:14:19.557963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.558243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.558269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.314 [2024-11-06 15:14:19.562757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.563042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.563069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.314 [2024-11-06 15:14:19.567694] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.567988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.568015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.314 [2024-11-06 15:14:19.572407] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.572712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.572738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.314 [2024-11-06 15:14:19.577168] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.577447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.577474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.314 [2024-11-06 15:14:19.582115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.314 [2024-11-06 15:14:19.582465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.314 [2024-11-06 15:14:19.582493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.573 [2024-11-06 15:14:19.587677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.573 [2024-11-06 15:14:19.587991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-11-06 15:14:19.588018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.573 [2024-11-06 15:14:19.592667] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.573 [2024-11-06 15:14:19.593032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-11-06 15:14:19.593060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.573 [2024-11-06 15:14:19.597766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.573 [2024-11-06 15:14:19.598062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-11-06 15:14:19.598090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.573 [2024-11-06 15:14:19.602642] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.573 [2024-11-06 15:14:19.602964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-11-06 15:14:19.602992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.573 [2024-11-06 15:14:19.607600] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.573 [2024-11-06 15:14:19.607953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-11-06 15:14:19.607980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.573 [2024-11-06 15:14:19.612570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.573 [2024-11-06 15:14:19.612860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-11-06 15:14:19.612888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.573 [2024-11-06 15:14:19.617240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.573 [2024-11-06 15:14:19.617516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-11-06 15:14:19.617543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.573 [2024-11-06 15:14:19.622090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.573 [2024-11-06 15:14:19.622368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-11-06 15:14:19.622396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.573 [2024-11-06 15:14:19.626880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.573 [2024-11-06 15:14:19.627176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-11-06 15:14:19.627202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.573 [2024-11-06 15:14:19.631730] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.573 [2024-11-06 15:14:19.632014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-11-06 15:14:19.632040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.573 [2024-11-06 15:14:19.636438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.573 [2024-11-06 15:14:19.636736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-11-06 15:14:19.636763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.573 [2024-11-06 15:14:19.641337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.573 [2024-11-06 15:14:19.641613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-11-06 15:14:19.641641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.573 [2024-11-06 15:14:19.646188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.573 [2024-11-06 15:14:19.646465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-11-06 15:14:19.646492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.573 [2024-11-06 15:14:19.651047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.573 [2024-11-06 15:14:19.651405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 
[2024-11-06 15:14:19.651437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.573 [2024-11-06 15:14:19.656438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.573 [2024-11-06 15:14:19.656798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-11-06 15:14:19.656825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.573 [2024-11-06 15:14:19.661746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.573 [2024-11-06 15:14:19.662051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.662079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.666874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.667174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.667201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.672043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.672334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.672379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.677233] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.677531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.677559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.682544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.682911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.682940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.687981] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.688294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.688322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.693467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.693821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.693852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.698960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.699328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.699360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.704230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.704493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.704521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.709827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.710195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.710222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.715456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.715828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.715865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.720711] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.720981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.721009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.725683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.726021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.726055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.730922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.731274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.731305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.736429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.736796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.736833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.742041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.742348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.742411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.747313] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.747623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.747673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.752636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.753039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.753073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.758072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.758404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.758440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.763369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.763752] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.763814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.768567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.768883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.768926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.773480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.773773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.773816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.778302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.778594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.778624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.783400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.783794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.783825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.788345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.788640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.788697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.793242] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.793533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.793564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.798408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 
[2024-11-06 15:14:19.798671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-11-06 15:14:19.798707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.574 [2024-11-06 15:14:19.803198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.574 [2024-11-06 15:14:19.803556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-11-06 15:14:19.803607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.575 [2024-11-06 15:14:19.808225] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.575 [2024-11-06 15:14:19.808490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-11-06 15:14:19.808547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.575 [2024-11-06 15:14:19.813305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.575 [2024-11-06 15:14:19.813583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-11-06 15:14:19.813626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.575 [2024-11-06 15:14:19.818160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.575 [2024-11-06 15:14:19.818411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-11-06 15:14:19.818440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.575 [2024-11-06 15:14:19.823068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.575 [2024-11-06 15:14:19.823366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-11-06 15:14:19.823427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.575 [2024-11-06 15:14:19.828121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.575 [2024-11-06 15:14:19.828385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-11-06 15:14:19.828443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.575 [2024-11-06 15:14:19.832959] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.575 [2024-11-06 15:14:19.833286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-11-06 15:14:19.833317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.575 [2024-11-06 15:14:19.837855] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.575 [2024-11-06 15:14:19.838153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-11-06 15:14:19.838196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.575 [2024-11-06 15:14:19.842938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.575 [2024-11-06 15:14:19.843218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-11-06 15:14:19.843301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.833 [2024-11-06 15:14:19.848432] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.848732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.848805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.834 [2024-11-06 15:14:19.853756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.854054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.854084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.834 [2024-11-06 15:14:19.858775] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.859106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.859140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.834 [2024-11-06 15:14:19.864033] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.864334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.864363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.834 [2024-11-06 15:14:19.868846] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.869145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.869204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.834 [2024-11-06 15:14:19.873854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.874137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.874179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.834 [2024-11-06 15:14:19.878573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.878883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.878927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.834 [2024-11-06 15:14:19.883546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.883900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.883927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.834 [2024-11-06 15:14:19.888329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.888602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.888629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.834 [2024-11-06 15:14:19.893229] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.893510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.893536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.834 [2024-11-06 15:14:19.898152] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.898431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.898457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:15:50.834 [2024-11-06 15:14:19.902971] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.903276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.903311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.834 [2024-11-06 15:14:19.908283] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.908622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.908652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.834 [2024-11-06 15:14:19.913712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.914051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.914077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.834 [2024-11-06 15:14:19.919219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.919561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.919591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.834 [2024-11-06 15:14:19.924657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.925016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.925044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.834 [2024-11-06 15:14:19.930047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.930326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.930353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.834 [2024-11-06 15:14:19.935424] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.935777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.935804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.834 [2024-11-06 15:14:19.940654] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.941023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.941051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:50.834 [2024-11-06 15:14:19.946111] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.946424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.946454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.834 [2024-11-06 15:14:19.951446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.951792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.951820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:50.834 [2024-11-06 15:14:19.956576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2477f60) with pdu=0x2000190fef90 00:15:50.834 [2024-11-06 15:14:19.956877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.834 [2024-11-06 15:14:19.956904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:50.834 00:15:50.834 Latency(us) 00:15:50.834 [2024-11-06T15:14:20.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.834 [2024-11-06T15:14:20.109Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:15:50.834 nvme0n1 : 2.00 6057.78 757.22 0.00 0.00 2635.38 1452.22 6136.55 00:15:50.834 [2024-11-06T15:14:20.109Z] =================================================================================================================== 00:15:50.834 [2024-11-06T15:14:20.109Z] Total : 6057.78 757.22 0.00 0.00 2635.38 1452.22 6136.55 00:15:50.834 0 00:15:50.834 15:14:19 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:15:50.834 15:14:19 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:15:50.834 15:14:19 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:15:50.834 | .driver_specific 00:15:50.834 | .nvme_error 00:15:50.834 | .status_code 00:15:50.834 | .command_transient_transport_error' 00:15:50.834 15:14:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:15:51.093 15:14:20 -- host/digest.sh@71 -- # (( 391 > 0 )) 00:15:51.093 15:14:20 -- host/digest.sh@73 -- # killprocess 72145 00:15:51.093 15:14:20 -- common/autotest_common.sh@936 -- # '[' -z 72145 ']' 00:15:51.093 15:14:20 -- common/autotest_common.sh@940 -- # kill -0 72145 00:15:51.093 15:14:20 -- common/autotest_common.sh@941 
-- # uname 00:15:51.093 15:14:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:51.093 15:14:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72145 00:15:51.093 15:14:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:51.093 15:14:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:51.093 killing process with pid 72145 00:15:51.093 15:14:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72145' 00:15:51.093 15:14:20 -- common/autotest_common.sh@955 -- # kill 72145 00:15:51.093 Received shutdown signal, test time was about 2.000000 seconds 00:15:51.093 00:15:51.093 Latency(us) 00:15:51.093 [2024-11-06T15:14:20.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.093 [2024-11-06T15:14:20.368Z] =================================================================================================================== 00:15:51.093 [2024-11-06T15:14:20.368Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:51.093 15:14:20 -- common/autotest_common.sh@960 -- # wait 72145 00:15:51.353 15:14:20 -- host/digest.sh@115 -- # killprocess 71932 00:15:51.353 15:14:20 -- common/autotest_common.sh@936 -- # '[' -z 71932 ']' 00:15:51.353 15:14:20 -- common/autotest_common.sh@940 -- # kill -0 71932 00:15:51.353 15:14:20 -- common/autotest_common.sh@941 -- # uname 00:15:51.353 15:14:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:51.353 15:14:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71932 00:15:51.353 15:14:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:51.353 15:14:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:51.353 killing process with pid 71932 00:15:51.353 15:14:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71932' 00:15:51.353 15:14:20 -- common/autotest_common.sh@955 -- # kill 71932 00:15:51.353 15:14:20 -- common/autotest_common.sh@960 -- # wait 71932 00:15:51.612 00:15:51.612 real 0m18.304s 00:15:51.612 user 0m35.789s 00:15:51.612 sys 0m4.487s 00:15:51.612 15:14:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:51.612 15:14:20 -- common/autotest_common.sh@10 -- # set +x 00:15:51.612 ************************************ 00:15:51.612 END TEST nvmf_digest_error 00:15:51.612 ************************************ 00:15:51.612 15:14:20 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:15:51.612 15:14:20 -- host/digest.sh@139 -- # nvmftestfini 00:15:51.612 15:14:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:51.612 15:14:20 -- nvmf/common.sh@116 -- # sync 00:15:51.612 15:14:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:51.612 15:14:20 -- nvmf/common.sh@119 -- # set +e 00:15:51.612 15:14:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:51.612 15:14:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:51.612 rmmod nvme_tcp 00:15:51.612 rmmod nvme_fabrics 00:15:51.612 rmmod nvme_keyring 00:15:51.612 15:14:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:51.612 15:14:20 -- nvmf/common.sh@123 -- # set -e 00:15:51.612 15:14:20 -- nvmf/common.sh@124 -- # return 0 00:15:51.612 15:14:20 -- nvmf/common.sh@477 -- # '[' -n 71932 ']' 00:15:51.612 15:14:20 -- nvmf/common.sh@478 -- # killprocess 71932 00:15:51.612 15:14:20 -- common/autotest_common.sh@936 -- # '[' -z 71932 ']' 00:15:51.612 15:14:20 -- common/autotest_common.sh@940 -- # kill -0 71932 00:15:51.612 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: 
kill: (71932) - No such process 00:15:51.612 Process with pid 71932 is not found 00:15:51.612 15:14:20 -- common/autotest_common.sh@963 -- # echo 'Process with pid 71932 is not found' 00:15:51.612 15:14:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:51.612 15:14:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:51.612 15:14:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:51.612 15:14:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:51.612 15:14:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:51.612 15:14:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.612 15:14:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.612 15:14:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.612 15:14:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:51.612 00:15:51.612 real 0m35.307s 00:15:51.612 user 1m7.830s 00:15:51.612 sys 0m9.202s 00:15:51.612 15:14:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:51.612 15:14:20 -- common/autotest_common.sh@10 -- # set +x 00:15:51.612 ************************************ 00:15:51.612 END TEST nvmf_digest 00:15:51.612 ************************************ 00:15:51.612 15:14:20 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:15:51.612 15:14:20 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:15:51.612 15:14:20 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:15:51.612 15:14:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:51.612 15:14:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:51.612 15:14:20 -- common/autotest_common.sh@10 -- # set +x 00:15:51.872 ************************************ 00:15:51.872 START TEST nvmf_multipath 00:15:51.872 ************************************ 00:15:51.872 15:14:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:15:51.872 * Looking for test storage... 00:15:51.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:51.872 15:14:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:51.872 15:14:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:51.872 15:14:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:51.872 15:14:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:51.872 15:14:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:51.872 15:14:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:51.872 15:14:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:51.872 15:14:21 -- scripts/common.sh@335 -- # IFS=.-: 00:15:51.872 15:14:21 -- scripts/common.sh@335 -- # read -ra ver1 00:15:51.872 15:14:21 -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.872 15:14:21 -- scripts/common.sh@336 -- # read -ra ver2 00:15:51.872 15:14:21 -- scripts/common.sh@337 -- # local 'op=<' 00:15:51.872 15:14:21 -- scripts/common.sh@339 -- # ver1_l=2 00:15:51.872 15:14:21 -- scripts/common.sh@340 -- # ver2_l=1 00:15:51.872 15:14:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:51.872 15:14:21 -- scripts/common.sh@343 -- # case "$op" in 00:15:51.872 15:14:21 -- scripts/common.sh@344 -- # : 1 00:15:51.872 15:14:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:51.872 15:14:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:51.872 15:14:21 -- scripts/common.sh@364 -- # decimal 1 00:15:51.872 15:14:21 -- scripts/common.sh@352 -- # local d=1 00:15:51.872 15:14:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.872 15:14:21 -- scripts/common.sh@354 -- # echo 1 00:15:51.872 15:14:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:51.872 15:14:21 -- scripts/common.sh@365 -- # decimal 2 00:15:51.872 15:14:21 -- scripts/common.sh@352 -- # local d=2 00:15:51.872 15:14:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.872 15:14:21 -- scripts/common.sh@354 -- # echo 2 00:15:51.872 15:14:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:51.873 15:14:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:51.873 15:14:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:51.873 15:14:21 -- scripts/common.sh@367 -- # return 0 00:15:51.873 15:14:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.873 15:14:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:51.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.873 --rc genhtml_branch_coverage=1 00:15:51.873 --rc genhtml_function_coverage=1 00:15:51.873 --rc genhtml_legend=1 00:15:51.873 --rc geninfo_all_blocks=1 00:15:51.873 --rc geninfo_unexecuted_blocks=1 00:15:51.873 00:15:51.873 ' 00:15:51.873 15:14:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:51.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.873 --rc genhtml_branch_coverage=1 00:15:51.873 --rc genhtml_function_coverage=1 00:15:51.873 --rc genhtml_legend=1 00:15:51.873 --rc geninfo_all_blocks=1 00:15:51.873 --rc geninfo_unexecuted_blocks=1 00:15:51.873 00:15:51.873 ' 00:15:51.873 15:14:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:51.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.873 --rc genhtml_branch_coverage=1 00:15:51.873 --rc genhtml_function_coverage=1 00:15:51.873 --rc genhtml_legend=1 00:15:51.873 --rc geninfo_all_blocks=1 00:15:51.873 --rc geninfo_unexecuted_blocks=1 00:15:51.873 00:15:51.873 ' 00:15:51.873 15:14:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:51.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.873 --rc genhtml_branch_coverage=1 00:15:51.873 --rc genhtml_function_coverage=1 00:15:51.873 --rc genhtml_legend=1 00:15:51.873 --rc geninfo_all_blocks=1 00:15:51.873 --rc geninfo_unexecuted_blocks=1 00:15:51.873 00:15:51.873 ' 00:15:51.873 15:14:21 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:51.873 15:14:21 -- nvmf/common.sh@7 -- # uname -s 00:15:51.873 15:14:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.873 15:14:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.873 15:14:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.873 15:14:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.873 15:14:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.873 15:14:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.873 15:14:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.873 15:14:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.873 15:14:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.873 15:14:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.873 15:14:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:15:51.873 
15:14:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:15:51.873 15:14:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.873 15:14:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.873 15:14:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:51.873 15:14:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:51.873 15:14:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.873 15:14:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.873 15:14:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.873 15:14:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.873 15:14:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.873 15:14:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.873 15:14:21 -- paths/export.sh@5 -- # export PATH 00:15:51.873 15:14:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.873 15:14:21 -- nvmf/common.sh@46 -- # : 0 00:15:51.873 15:14:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:51.873 15:14:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:51.873 15:14:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:51.873 15:14:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.873 15:14:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.873 15:14:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:51.873 15:14:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:51.873 15:14:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:51.873 15:14:21 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:51.873 15:14:21 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:51.873 15:14:21 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:51.873 15:14:21 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:51.873 15:14:21 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:51.873 15:14:21 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:51.873 15:14:21 -- host/multipath.sh@30 -- # nvmftestinit 00:15:51.873 15:14:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:51.873 15:14:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.873 15:14:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:51.873 15:14:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:51.873 15:14:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:51.873 15:14:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.873 15:14:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.873 15:14:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.873 15:14:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:51.873 15:14:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:51.873 15:14:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:51.873 15:14:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:51.873 15:14:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:51.873 15:14:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:51.873 15:14:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.873 15:14:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.873 15:14:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:51.873 15:14:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:51.873 15:14:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:51.873 15:14:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:51.873 15:14:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:51.873 15:14:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.873 15:14:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:51.873 15:14:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:51.873 15:14:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:51.873 15:14:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:51.873 15:14:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:51.873 15:14:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:51.873 Cannot find device "nvmf_tgt_br" 00:15:51.873 15:14:21 -- nvmf/common.sh@154 -- # true 00:15:51.873 15:14:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:52.132 Cannot find device "nvmf_tgt_br2" 00:15:52.132 15:14:21 -- nvmf/common.sh@155 -- # true 00:15:52.132 15:14:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:52.132 15:14:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:52.132 Cannot find device "nvmf_tgt_br" 00:15:52.132 15:14:21 -- nvmf/common.sh@157 -- # true 00:15:52.132 15:14:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:52.132 Cannot find device 
"nvmf_tgt_br2" 00:15:52.132 15:14:21 -- nvmf/common.sh@158 -- # true 00:15:52.132 15:14:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:52.132 15:14:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:52.132 15:14:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:52.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.132 15:14:21 -- nvmf/common.sh@161 -- # true 00:15:52.132 15:14:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.132 15:14:21 -- nvmf/common.sh@162 -- # true 00:15:52.132 15:14:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:52.132 15:14:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:52.132 15:14:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:52.132 15:14:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:52.132 15:14:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:52.132 15:14:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:52.132 15:14:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:52.132 15:14:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:52.132 15:14:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:52.132 15:14:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:52.132 15:14:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:52.132 15:14:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:52.132 15:14:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:52.132 15:14:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:52.132 15:14:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:52.132 15:14:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:52.132 15:14:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:52.132 15:14:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:52.132 15:14:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:52.132 15:14:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:52.391 15:14:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:52.391 15:14:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:52.391 15:14:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:52.391 15:14:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:52.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:52.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:15:52.391 00:15:52.391 --- 10.0.0.2 ping statistics --- 00:15:52.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.391 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:15:52.391 15:14:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:52.391 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:52.391 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:15:52.391 00:15:52.391 --- 10.0.0.3 ping statistics --- 00:15:52.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.391 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:52.391 15:14:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:52.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:52.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:52.391 00:15:52.391 --- 10.0.0.1 ping statistics --- 00:15:52.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.391 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:52.391 15:14:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.391 15:14:21 -- nvmf/common.sh@421 -- # return 0 00:15:52.391 15:14:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:52.391 15:14:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.391 15:14:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:52.391 15:14:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:52.391 15:14:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.391 15:14:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:52.391 15:14:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:52.391 15:14:21 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:15:52.391 15:14:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:52.391 15:14:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:52.391 15:14:21 -- common/autotest_common.sh@10 -- # set +x 00:15:52.391 15:14:21 -- nvmf/common.sh@469 -- # nvmfpid=72417 00:15:52.391 15:14:21 -- nvmf/common.sh@470 -- # waitforlisten 72417 00:15:52.391 15:14:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:52.391 15:14:21 -- common/autotest_common.sh@829 -- # '[' -z 72417 ']' 00:15:52.391 15:14:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.391 15:14:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.391 15:14:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.391 15:14:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.391 15:14:21 -- common/autotest_common.sh@10 -- # set +x 00:15:52.391 [2024-11-06 15:14:21.511341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:52.391 [2024-11-06 15:14:21.511416] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.391 [2024-11-06 15:14:21.652500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:52.650 [2024-11-06 15:14:21.722285] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:52.650 [2024-11-06 15:14:21.722463] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.650 [2024-11-06 15:14:21.722478] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
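
The nvmf_veth_init sequence above builds the test's isolated topology: a network namespace (nvmf_tgt_ns_spdk) holding the two target interfaces, an initiator interface on the host at 10.0.0.1, and a bridge (nvmf_br) joining the veth peer ends, with 10.0.0.2 and 10.0.0.3 on the target side. A condensed sketch of roughly equivalent commands, using only the device names and addresses shown in the log (cleanup and error handling omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # first target port (10.0.0.2)
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target port (10.0.0.3)
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                          # connectivity check, as logged above
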
00:15:52.650 [2024-11-06 15:14:21.722489] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.650 [2024-11-06 15:14:21.722706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.650 [2024-11-06 15:14:21.722714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.217 15:14:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.217 15:14:22 -- common/autotest_common.sh@862 -- # return 0 00:15:53.217 15:14:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:53.217 15:14:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:53.217 15:14:22 -- common/autotest_common.sh@10 -- # set +x 00:15:53.217 15:14:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.217 15:14:22 -- host/multipath.sh@33 -- # nvmfapp_pid=72417 00:15:53.217 15:14:22 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:53.476 [2024-11-06 15:14:22.737292] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.734 15:14:22 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:54.004 Malloc0 00:15:54.004 15:14:23 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:54.263 15:14:23 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:54.522 15:14:23 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.522 [2024-11-06 15:14:23.786610] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.781 15:14:23 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:54.781 [2024-11-06 15:14:24.054905] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:55.039 15:14:24 -- host/multipath.sh@44 -- # bdevperf_pid=72467 00:15:55.039 15:14:24 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:55.039 15:14:24 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:55.039 15:14:24 -- host/multipath.sh@47 -- # waitforlisten 72467 /var/tmp/bdevperf.sock 00:15:55.039 15:14:24 -- common/autotest_common.sh@829 -- # '[' -z 72467 ']' 00:15:55.039 15:14:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:55.039 15:14:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:55.039 15:14:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
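
Inside that namespace the target is then provisioned entirely over JSON-RPC: a TCP transport, a 64 MB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with ANA reporting enabled, and two listeners on 10.0.0.2 ports 4420 and 4421. A condensed sketch of those calls as they appear in the log (the $rpc shorthand is added here for brevity):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# the target itself runs inside the namespace (see nvmf/common.sh@468 above);
# the test waits for the RPC socket before issuing any calls
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                                                # 64 MB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r enables ANA reporting
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # path 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # path 2
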
00:15:55.039 15:14:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.039 15:14:24 -- common/autotest_common.sh@10 -- # set +x 00:15:55.975 15:14:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.975 15:14:25 -- common/autotest_common.sh@862 -- # return 0 00:15:55.975 15:14:25 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:56.235 15:14:25 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:15:56.494 Nvme0n1 00:15:56.494 15:14:25 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:56.752 Nvme0n1 00:15:57.012 15:14:26 -- host/multipath.sh@78 -- # sleep 1 00:15:57.012 15:14:26 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:57.948 15:14:27 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:15:57.948 15:14:27 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:58.207 15:14:27 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:58.466 15:14:27 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:15:58.466 15:14:27 -- host/multipath.sh@65 -- # dtrace_pid=72518 00:15:58.466 15:14:27 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72417 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:15:58.466 15:14:27 -- host/multipath.sh@66 -- # sleep 6 00:16:05.028 15:14:33 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:05.028 15:14:33 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:05.028 15:14:33 -- host/multipath.sh@67 -- # active_port=4421 00:16:05.028 15:14:33 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:05.028 Attaching 4 probes... 
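
On the host side the same pattern is used against bdevperf's private RPC socket: bdevperf starts idle (-z), retries are made unbounded, and the controller is attached twice, the second time with -x multipath so port 4421 becomes an extra path to the same Nvme0n1 bdev rather than a second device; perform_tests then drives the 90-second verify workload whose per-path counters appear in the probe samples that continue below. A sketch under those assumptions, using the exact commands echoed in the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 90 &
$rpc -s $sock bdev_nvme_set_options -r -1                          # retry options as used by the test
$rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
$rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s $sock perform_tests &
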
00:16:05.028 @path[10.0.0.2, 4421]: 19065 00:16:05.028 @path[10.0.0.2, 4421]: 19817 00:16:05.028 @path[10.0.0.2, 4421]: 19804 00:16:05.028 @path[10.0.0.2, 4421]: 19937 00:16:05.028 @path[10.0.0.2, 4421]: 20344 00:16:05.028 15:14:33 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:05.028 15:14:33 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:05.028 15:14:33 -- host/multipath.sh@69 -- # sed -n 1p 00:16:05.028 15:14:33 -- host/multipath.sh@69 -- # port=4421 00:16:05.028 15:14:33 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:05.028 15:14:33 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:05.028 15:14:33 -- host/multipath.sh@72 -- # kill 72518 00:16:05.028 15:14:33 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:05.028 15:14:33 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:16:05.028 15:14:33 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:05.028 15:14:34 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:05.286 15:14:34 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:16:05.286 15:14:34 -- host/multipath.sh@65 -- # dtrace_pid=72641 00:16:05.286 15:14:34 -- host/multipath.sh@66 -- # sleep 6 00:16:05.286 15:14:34 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72417 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:11.858 15:14:40 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:11.858 15:14:40 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:16:11.858 15:14:40 -- host/multipath.sh@67 -- # active_port=4420 00:16:11.858 15:14:40 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:11.858 Attaching 4 probes... 
00:16:11.858 @path[10.0.0.2, 4420]: 19758 00:16:11.858 @path[10.0.0.2, 4420]: 19763 00:16:11.858 @path[10.0.0.2, 4420]: 19981 00:16:11.858 @path[10.0.0.2, 4420]: 19869 00:16:11.858 @path[10.0.0.2, 4420]: 19713 00:16:11.858 15:14:40 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:11.858 15:14:40 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:11.858 15:14:40 -- host/multipath.sh@69 -- # sed -n 1p 00:16:11.858 15:14:40 -- host/multipath.sh@69 -- # port=4420 00:16:11.858 15:14:40 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:16:11.858 15:14:40 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:16:11.858 15:14:40 -- host/multipath.sh@72 -- # kill 72641 00:16:11.858 15:14:40 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:11.858 15:14:40 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:16:11.858 15:14:40 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:11.858 15:14:40 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:12.117 15:14:41 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:16:12.117 15:14:41 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72417 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:12.117 15:14:41 -- host/multipath.sh@65 -- # dtrace_pid=72756 00:16:12.117 15:14:41 -- host/multipath.sh@66 -- # sleep 6 00:16:18.685 15:14:47 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:18.685 15:14:47 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:18.685 15:14:47 -- host/multipath.sh@67 -- # active_port=4421 00:16:18.685 15:14:47 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:18.685 Attaching 4 probes... 
00:16:18.685 @path[10.0.0.2, 4421]: 14897 00:16:18.685 @path[10.0.0.2, 4421]: 19727 00:16:18.685 @path[10.0.0.2, 4421]: 19535 00:16:18.685 @path[10.0.0.2, 4421]: 19428 00:16:18.685 @path[10.0.0.2, 4421]: 19485 00:16:18.685 15:14:47 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:18.685 15:14:47 -- host/multipath.sh@69 -- # sed -n 1p 00:16:18.685 15:14:47 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:18.685 15:14:47 -- host/multipath.sh@69 -- # port=4421 00:16:18.685 15:14:47 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:18.685 15:14:47 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:18.685 15:14:47 -- host/multipath.sh@72 -- # kill 72756 00:16:18.685 15:14:47 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:18.685 15:14:47 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:16:18.685 15:14:47 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:18.685 15:14:47 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:18.944 15:14:48 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:16:18.944 15:14:48 -- host/multipath.sh@65 -- # dtrace_pid=72874 00:16:18.944 15:14:48 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72417 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:18.944 15:14:48 -- host/multipath.sh@66 -- # sleep 6 00:16:25.512 15:14:54 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:25.512 15:14:54 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:16:25.512 15:14:54 -- host/multipath.sh@67 -- # active_port= 00:16:25.512 15:14:54 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:25.512 Attaching 4 probes... 
00:16:25.512 00:16:25.512 00:16:25.512 00:16:25.512 00:16:25.512 00:16:25.512 15:14:54 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:25.512 15:14:54 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:25.512 15:14:54 -- host/multipath.sh@69 -- # sed -n 1p 00:16:25.512 15:14:54 -- host/multipath.sh@69 -- # port= 00:16:25.512 15:14:54 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:16:25.512 15:14:54 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:16:25.512 15:14:54 -- host/multipath.sh@72 -- # kill 72874 00:16:25.512 15:14:54 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:25.512 15:14:54 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:16:25.512 15:14:54 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:25.512 15:14:54 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:25.771 15:14:54 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:16:25.771 15:14:54 -- host/multipath.sh@65 -- # dtrace_pid=72993 00:16:25.771 15:14:54 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72417 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:25.771 15:14:54 -- host/multipath.sh@66 -- # sleep 6 00:16:32.338 15:15:00 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:32.338 15:15:00 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:32.338 15:15:01 -- host/multipath.sh@67 -- # active_port=4421 00:16:32.338 15:15:01 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:32.338 Attaching 4 probes... 
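
Each confirm_io_on_port cycle above follows the same shape: the two listeners are flipped to a new pair of ANA states, bpftrace samples the target's per-path I/O counters for six seconds, and the port seen in the @path samples must match the listener that the RPC layer reports in the expected state (for the all-inaccessible case just above, both sides are expected to be empty; the samples for the next 4421 check continue below). A rough sketch of one such check, assuming the bpftrace output is captured into trace.txt as in the log:

nqn=nqn.2016-06.io.spdk:cnode1
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
expected_state=optimized                                 # or non_optimized, or "" for the no-I/O case
$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n optimized
# nvmf_path.bt counts I/Os per path on the target (pid 72417); output assumed redirected to trace.txt
/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72417 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > trace.txt &
sleep 6
active_port=$($rpc nvmf_subsystem_get_listeners $nqn \
  | jq -r ".[] | select(.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")
port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
[[ "$port" == "$active_port" ]]                          # cycle passes only if I/O followed the advertised state
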
00:16:32.338 @path[10.0.0.2, 4421]: 18983 00:16:32.338 @path[10.0.0.2, 4421]: 19085 00:16:32.338 @path[10.0.0.2, 4421]: 18813 00:16:32.338 @path[10.0.0.2, 4421]: 19128 00:16:32.338 @path[10.0.0.2, 4421]: 18954 00:16:32.338 15:15:01 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:32.338 15:15:01 -- host/multipath.sh@69 -- # sed -n 1p 00:16:32.338 15:15:01 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:32.338 15:15:01 -- host/multipath.sh@69 -- # port=4421 00:16:32.338 15:15:01 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:32.338 15:15:01 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:32.338 15:15:01 -- host/multipath.sh@72 -- # kill 72993 00:16:32.338 15:15:01 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:32.338 15:15:01 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:32.338 [2024-11-06 15:15:01.454800] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.454856] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.454885] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.454893] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.454901] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.454909] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.454917] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.454925] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.454933] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.454941] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.454949] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.454957] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.454965] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.454973] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.454981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.454989] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.454997] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.455005] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.455013] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.455021] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.455029] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.455036] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.455044] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.455052] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.455075] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.455083] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.455090] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.455098] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.455110] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.455119] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.455127] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.338 [2024-11-06 15:15:01.455135] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.339 [2024-11-06 15:15:01.455143] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.339 [2024-11-06 15:15:01.455151] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.339 [2024-11-06 15:15:01.455159] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.339 [2024-11-06 15:15:01.455167] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.339 [2024-11-06 15:15:01.455175] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.339 [2024-11-06 15:15:01.455182] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.339 [2024-11-06 15:15:01.455190] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.339 [2024-11-06 15:15:01.455198] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d76230 is same with the state(5) to be set 00:16:32.339 15:15:01 -- host/multipath.sh@101 -- # sleep 1 00:16:33.275 15:15:02 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:16:33.275 15:15:02 -- host/multipath.sh@65 -- # dtrace_pid=73111 00:16:33.275 15:15:02 -- host/multipath.sh@66 -- # sleep 6 00:16:33.275 15:15:02 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72417 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:39.842 15:15:08 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:39.842 15:15:08 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:16:39.842 15:15:08 -- host/multipath.sh@67 -- # active_port=4420 00:16:39.842 15:15:08 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:39.842 Attaching 4 probes... 00:16:39.842 @path[10.0.0.2, 4420]: 18464 00:16:39.843 @path[10.0.0.2, 4420]: 18934 00:16:39.843 @path[10.0.0.2, 4420]: 19158 00:16:39.843 @path[10.0.0.2, 4420]: 19016 00:16:39.843 @path[10.0.0.2, 4420]: 19151 00:16:39.843 15:15:08 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:39.843 15:15:08 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:39.843 15:15:08 -- host/multipath.sh@69 -- # sed -n 1p 00:16:39.843 15:15:08 -- host/multipath.sh@69 -- # port=4420 00:16:39.843 15:15:08 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:16:39.843 15:15:08 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:16:39.843 15:15:08 -- host/multipath.sh@72 -- # kill 73111 00:16:39.843 15:15:08 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:39.843 15:15:08 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:39.843 [2024-11-06 15:15:09.024112] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:39.843 15:15:09 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:40.101 15:15:09 -- host/multipath.sh@111 -- # sleep 6 00:16:46.665 15:15:15 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:16:46.665 15:15:15 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72417 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:46.665 15:15:15 -- host/multipath.sh@65 -- # dtrace_pid=73291 00:16:46.665 15:15:15 -- host/multipath.sh@66 -- # sleep 6 00:16:53.238 15:15:21 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:53.238 15:15:21 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:53.238 15:15:21 -- host/multipath.sh@67 -- # active_port=4421 00:16:53.238 15:15:21 -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:53.238 Attaching 4 probes... 00:16:53.239 @path[10.0.0.2, 4421]: 19076 00:16:53.239 @path[10.0.0.2, 4421]: 18904 00:16:53.239 @path[10.0.0.2, 4421]: 19040 00:16:53.239 @path[10.0.0.2, 4421]: 18865 00:16:53.239 @path[10.0.0.2, 4421]: 18875 00:16:53.239 15:15:21 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:53.239 15:15:21 -- host/multipath.sh@69 -- # sed -n 1p 00:16:53.239 15:15:21 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:53.239 15:15:21 -- host/multipath.sh@69 -- # port=4421 00:16:53.239 15:15:21 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:53.239 15:15:21 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:53.239 15:15:21 -- host/multipath.sh@72 -- # kill 73291 00:16:53.239 15:15:21 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:53.239 15:15:21 -- host/multipath.sh@114 -- # killprocess 72467 00:16:53.239 15:15:21 -- common/autotest_common.sh@936 -- # '[' -z 72467 ']' 00:16:53.239 15:15:21 -- common/autotest_common.sh@940 -- # kill -0 72467 00:16:53.239 15:15:21 -- common/autotest_common.sh@941 -- # uname 00:16:53.239 15:15:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:53.239 15:15:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72467 00:16:53.239 killing process with pid 72467 00:16:53.239 15:15:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:53.239 15:15:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:53.239 15:15:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72467' 00:16:53.239 15:15:21 -- common/autotest_common.sh@955 -- # kill 72467 00:16:53.239 15:15:21 -- common/autotest_common.sh@960 -- # wait 72467 00:16:53.239 Connection closed with partial response: 00:16:53.239 00:16:53.239 00:16:53.239 15:15:21 -- host/multipath.sh@116 -- # wait 72467 00:16:53.239 15:15:21 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:53.239 [2024-11-06 15:14:24.117520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:53.239 [2024-11-06 15:14:24.117615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72467 ] 00:16:53.239 [2024-11-06 15:14:24.253801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.239 [2024-11-06 15:14:24.322589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.239 Running I/O for 90 seconds... 
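
In the bdevperf log that follows (try.txt), every queued READ/WRITE is paired with a completion carrying status ASYMMETRIC ACCESS INACCESSIBLE (03/02): these are the commands that landed on a path while its ANA state was inaccessible, which is what pushes the host-side bdev_nvme multipath code to retry them on the other listener. The advertised state of each listener at any moment can be read back with the same RPC the test uses; the jq projection below is only an illustrative reformatting:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
  | jq -r '.[] | "\(.address.trsvcid) \(.ana_states[0].ana_state)"'
# e.g. prints:
#   4420 non_optimized
#   4421 optimized
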
00:16:53.239 [2024-11-06 15:14:34.355508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.239 [2024-11-06 15:14:34.355576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.355648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.239 [2024-11-06 15:14:34.355722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.355749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.239 [2024-11-06 15:14:34.355766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.355789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.239 [2024-11-06 15:14:34.355804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.355826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:88872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.239 [2024-11-06 15:14:34.355842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.355864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.239 [2024-11-06 15:14:34.355879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.355901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:88888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.239 [2024-11-06 15:14:34.355916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.355937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.239 [2024-11-06 15:14:34.355953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.355975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.239 [2024-11-06 15:14:34.355990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.356012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.239 [2024-11-06 15:14:34.356027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.356063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.239 [2024-11-06 15:14:34.356098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.356123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:88928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.239 [2024-11-06 15:14:34.356138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.356158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.239 [2024-11-06 15:14:34.356173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.356416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.239 [2024-11-06 15:14:34.356442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.356466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:88952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.239 [2024-11-06 15:14:34.356483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.356504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.239 [2024-11-06 15:14:34.356520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.356543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.239 [2024-11-06 15:14:34.356558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.356580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.239 [2024-11-06 15:14:34.356595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.356628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.239 [2024-11-06 15:14:34.356643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.356681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:88336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.239 [2024-11-06 15:14:34.356697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.356718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.239 [2024-11-06 15:14:34.356750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.356774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.239 [2024-11-06 15:14:34.356792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.356814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.239 [2024-11-06 15:14:34.356830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.356865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:88960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.239 [2024-11-06 15:14:34.356882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.356905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:88968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.239 [2024-11-06 15:14:34.356920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.356942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.239 [2024-11-06 15:14:34.356958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.356980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.239 [2024-11-06 15:14:34.356995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.357017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.239 [2024-11-06 15:14:34.357032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.357068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.239 [2024-11-06 15:14:34.357083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:53.239 [2024-11-06 15:14:34.357104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:53.240 [2024-11-06 15:14:34.357119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.240 [2024-11-06 15:14:34.357155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.357192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.240 [2024-11-06 15:14:34.357243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:89040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.357294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:89048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.357332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.357377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:89064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.357413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.240 [2024-11-06 15:14:34.357450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:89080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.357486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 
lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.240 [2024-11-06 15:14:34.357522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.357558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.357595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.357645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:88432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.357714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.357764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.357804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.357842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.357888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.240 [2024-11-06 15:14:34.357934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.240 [2024-11-06 15:14:34.357972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.357994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.240 [2024-11-06 15:14:34.358009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.358031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.240 [2024-11-06 15:14:34.358047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.358083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:89128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.358098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.358119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.240 [2024-11-06 15:14:34.358134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.358155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.240 [2024-11-06 15:14:34.358170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.358191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.240 [2024-11-06 15:14:34.358206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.358227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.240 [2024-11-06 15:14:34.358242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.358263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:89168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.358279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.358300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.240 [2024-11-06 15:14:34.358315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
00:16:53.240 [2024-11-06 15:14:34.358336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.240 [2024-11-06 15:14:34.358374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.358396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:89192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.358411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.358431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.240 [2024-11-06 15:14:34.358446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.358466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.358481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.358502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.358516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.358537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:89224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.358551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.358588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.240 [2024-11-06 15:14:34.358619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.358640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.240 [2024-11-06 15:14:34.358656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.358683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.240 [2024-11-06 15:14:34.358699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:53.240 [2024-11-06 15:14:34.358734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:88576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.358753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.358776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:88600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.358792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.358813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.358829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.358851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:88640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.358866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.358896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:88664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.358912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.358934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:88688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.358949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.358971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:88696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.358986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.359023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.241 [2024-11-06 15:14:34.359060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.359098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.241 [2024-11-06 15:14:34.359149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.241 [2024-11-06 15:14:34.359185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.241 [2024-11-06 15:14:34.359220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.241 [2024-11-06 15:14:34.359257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.359325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.359363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.241 [2024-11-06 15:14:34.359408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:89328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.359451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.241 [2024-11-06 15:14:34.359488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.241 [2024-11-06 15:14:34.359525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:53.241 [2024-11-06 15:14:34.359562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.359609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:89368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.359659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.241 [2024-11-06 15:14:34.359736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.241 [2024-11-06 15:14:34.359778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.359816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.359853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:88712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.359890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.359936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.359975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.359997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 
nsid:1 lba:88752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.360027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.360064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:88768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.360081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.360102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.360117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.360140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.360155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.361998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.362033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.362092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.362110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.362131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:89416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.362147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.362167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.362183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.362203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:89432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.241 [2024-11-06 15:14:34.362218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:53.241 [2024-11-06 15:14:34.362239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.242 [2024-11-06 15:14:34.362253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:34.362274] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.242 [2024-11-06 15:14:34.362300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:34.362323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:34.362339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:34.362360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:34.362374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:34.362395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:34.362409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:34.362430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:34.362445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:34.362465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.242 [2024-11-06 15:14:34.362480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:34.362500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.242 [2024-11-06 15:14:34.362515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:34.362536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:89504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:34.362554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:34.362575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:34.362589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:34.362611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:89520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:34.362626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0017 p:0 m:0 
dnr:0 00:16:53.242 [2024-11-06 15:14:34.362647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.242 [2024-11-06 15:14:34.362678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:34.362730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.242 [2024-11-06 15:14:34.362750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:34.362773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.242 [2024-11-06 15:14:34.362790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.926835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.242 [2024-11-06 15:14:40.926903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.926975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.242 [2024-11-06 15:14:40.926996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:40.927031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:40.927063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.242 [2024-11-06 15:14:40.927095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.242 [2024-11-06 15:14:40.927126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.242 [2024-11-06 15:14:40.927158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:40.927189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.242 [2024-11-06 15:14:40.927227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:40.927258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.242 [2024-11-06 15:14:40.927320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.242 [2024-11-06 15:14:40.927353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:40.927410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:125648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.242 [2024-11-06 15:14:40.927651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:40.927685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:124984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:40.927751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:40.927789] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:125032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:40.927824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:40.927858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:40.927891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:40.927924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:125088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:40.927959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.927979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:125096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:40.927992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.928012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.242 [2024-11-06 15:14:40.928026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:53.242 [2024-11-06 15:14:40.928056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.242 [2024-11-06 15:14:40.928072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.243 [2024-11-06 15:14:40.928121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125688 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.928154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:125696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.928187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.928221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.928255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.928288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:125728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.243 [2024-11-06 15:14:40.928322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:125736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.243 [2024-11-06 15:14:40.928355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.243 [2024-11-06 15:14:40.928388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.928420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.928453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928473] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:125768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.243 [2024-11-06 15:14:40.928499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:125776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.243 [2024-11-06 15:14:40.928536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:125784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.243 [2024-11-06 15:14:40.928575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:125792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.243 [2024-11-06 15:14:40.928608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.928641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.928690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.928740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.928774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:125160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.928808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.928842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 
15:14:40.928862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.928876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.928910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.928958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.928980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:125808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.243 [2024-11-06 15:14:40.928994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.929015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.929044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.929063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.929076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.929096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.929109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.929128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.243 [2024-11-06 15:14:40.929141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.929161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:125848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.243 [2024-11-06 15:14:40.929174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.929193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.929207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:42 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.929226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.243 [2024-11-06 15:14:40.929239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:53.243 [2024-11-06 15:14:40.929259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.929272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.929304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.929337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.929370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.929410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.929444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:125920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.244 [2024-11-06 15:14:40.929477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:125928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.244 [2024-11-06 15:14:40.929510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:125936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.244 [2024-11-06 15:14:40.929544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:125944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.244 [2024-11-06 15:14:40.929577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:125248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.929610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.929642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.929689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.929726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:125376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.929759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:125384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.929792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.929834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:125416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.929866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:125952 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:16:53.244 [2024-11-06 15:14:40.929899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.929933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.929966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.929985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.930002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.930022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:125984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.244 [2024-11-06 15:14:40.930037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.930056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.930070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.930089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:126000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.244 [2024-11-06 15:14:40.930103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.930122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.930136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.930155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:126016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.244 [2024-11-06 15:14:40.930168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.930188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.930201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.930245] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.244 [2024-11-06 15:14:40.930261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.930282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.930296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.930315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.244 [2024-11-06 15:14:40.930329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.930364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.244 [2024-11-06 15:14:40.930378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.930398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.930412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.930432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.930446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.930465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.930479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.930499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.244 [2024-11-06 15:14:40.930514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.930538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:126096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.244 [2024-11-06 15:14:40.930553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.930573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:126104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.244 [2024-11-06 15:14:40.930592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 
15:14:40.930613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:126112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.244 [2024-11-06 15:14:40.930627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:53.244 [2024-11-06 15:14:40.930647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:40.930661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.930710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:125448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:40.930750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.930772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:40.930786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.930806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:125472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:40.930820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.930841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:40.930854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.930875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:125512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:40.930889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.930909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:40.930923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.931802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:40.931830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.931864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:40.931881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.931909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:40.931924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.931950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:126136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.245 [2024-11-06 15:14:40.931965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.931993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:126144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.245 [2024-11-06 15:14:40.932011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.932039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:126152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.245 [2024-11-06 15:14:40.932054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.932081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:40.932107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.932137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:126168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.245 [2024-11-06 15:14:40.932154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.932181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:126176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.245 [2024-11-06 15:14:40.932196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.932224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:40.932239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.932265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:126192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.245 [2024-11-06 15:14:40.932280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.932307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:40.932321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.932348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:40.932362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.932389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:40.932403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.932431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.245 [2024-11-06 15:14:40.932445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.932472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:40.932487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.932529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:126240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.245 [2024-11-06 15:14:40.932548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:40.932576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:40.932591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:48.087562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.245 [2024-11-06 15:14:48.087700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:48.087761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.245 [2024-11-06 15:14:48.087782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:48.087803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.245 [2024-11-06 15:14:48.087817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:48.087836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:53.245 [2024-11-06 15:14:48.087849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:48.087868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.245 [2024-11-06 15:14:48.087882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:48.087900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:48.087914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:48.087932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:48.087946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:48.087964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:48.087978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:48.087996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.245 [2024-11-06 15:14:48.088009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:48.088028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:48.088041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:48.088060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:48.088073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:48.088091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:48.088105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:48.088123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:48.088136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:48.088166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:13 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.245 [2024-11-06 15:14:48.088181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:53.245 [2024-11-06 15:14:48.088199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.245 [2024-11-06 15:14:48.088213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.088232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.246 [2024-11-06 15:14:48.088246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.088388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.246 [2024-11-06 15:14:48.088408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.088430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.246 [2024-11-06 15:14:48.088444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.088466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.088481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.088500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.088514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.088533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.088546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.088566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.088579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.088598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.088612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 
15:14:48.088631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.088644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.088695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.088712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.088742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.088757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.088777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.088792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.088811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.088825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.088846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.088860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.088879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.246 [2024-11-06 15:14:48.088893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.088913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.246 [2024-11-06 15:14:48.088927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.088946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.246 [2024-11-06 15:14:48.088961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.088981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.088995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.089028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.246 [2024-11-06 15:14:48.089076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.089109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.089141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.089179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.089214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.089246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.089278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.089326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.089359] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.246 [2024-11-06 15:14:48.089394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.246 [2024-11-06 15:14:48.089427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.089460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.089493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.089526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.089560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.089600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.246 [2024-11-06 15:14:48.089641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.246 [2024-11-06 15:14:48.089675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:53.246 [2024-11-06 15:14:48.089753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:53.246 [2024-11-06 15:14:48.089772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.247 [2024-11-06 15:14:48.089787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.089825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:109896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.089840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.089860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.089874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.089895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.089909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.089929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:109960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.089943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.089964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.089978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.089999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.090013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:109992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.090062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.090119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:73 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.247 [2024-11-06 15:14:48.090156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.247 [2024-11-06 15:14:48.090191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.247 [2024-11-06 15:14:48.090227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.247 [2024-11-06 15:14:48.090262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.247 [2024-11-06 15:14:48.090296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.090330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.090364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.247 [2024-11-06 15:14:48.090412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.090447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.090481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 
15:14:48.090501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.090514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.247 [2024-11-06 15:14:48.090548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.090588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.090622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.090656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.090689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.090722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.090771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.090812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.090845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:46 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.090878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.090912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.090946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.090966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.247 [2024-11-06 15:14:48.090980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.091008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.247 [2024-11-06 15:14:48.091025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.247 [2024-11-06 15:14:48.091046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.248 [2024-11-06 15:14:48.091060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.091080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.248 [2024-11-06 15:14:48.091095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.091114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.248 [2024-11-06 15:14:48.091128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.091148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.248 [2024-11-06 15:14:48.091162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.091181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.248 [2024-11-06 15:14:48.091195] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.091215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.248 [2024-11-06 15:14:48.091228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.091248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.248 [2024-11-06 15:14:48.091271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.091312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.248 [2024-11-06 15:14:48.091328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.091349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.248 [2024-11-06 15:14:48.091364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.091389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.248 [2024-11-06 15:14:48.091404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.092271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.248 [2024-11-06 15:14:48.092299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.092346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.248 [2024-11-06 15:14:48.092364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.092391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.248 [2024-11-06 15:14:48.092406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.092432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.248 [2024-11-06 15:14:48.092447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.092473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110784 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:53.248 [2024-11-06 15:14:48.092487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.092514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.248 [2024-11-06 15:14:48.092529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.092556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.248 [2024-11-06 15:14:48.092571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.092597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.248 [2024-11-06 15:14:48.092611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.092637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.248 [2024-11-06 15:14:48.092651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.092694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.248 [2024-11-06 15:14:48.092710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.092736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.248 [2024-11-06 15:14:48.092751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.092777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.248 [2024-11-06 15:14:48.092791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.092833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.248 [2024-11-06 15:14:48.092852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.092879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.248 [2024-11-06 15:14:48.092903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.092931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.248 [2024-11-06 15:14:48.092946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.092976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.248 [2024-11-06 15:14:48.092991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.093017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.248 [2024-11-06 15:14:48.093032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.093058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.248 [2024-11-06 15:14:48.093072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.093098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.248 [2024-11-06 15:14:48.093113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.093139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.248 [2024-11-06 15:14:48.093153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.093179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.248 [2024-11-06 15:14:48.093194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.093220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.248 [2024-11-06 15:14:48.093235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.093261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.248 [2024-11-06 15:14:48.093275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.093301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.248 [2024-11-06 15:14:48.093331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.093358] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.248 [2024-11-06 15:14:48.093373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:14:48.093400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.248 [2024-11-06 15:14:48.093421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:15:01.454966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.248 [2024-11-06 15:15:01.455019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:15:01.455037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.248 [2024-11-06 15:15:01.455051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.248 [2024-11-06 15:15:01.455079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.248 [2024-11-06 15:15:01.455092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.249 [2024-11-06 15:15:01.455117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455129] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0bb20 is same with the state(5) to be set 00:16:53.249 [2024-11-06 15:15:01.455250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.455974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.455989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.456002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.456017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.249 [2024-11-06 15:15:01.456029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.456043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.249 [2024-11-06 15:15:01.456056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.456070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.249 [2024-11-06 15:15:01.456083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.456097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.249 [2024-11-06 15:15:01.456125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.456157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.456171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.456186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.456200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.456215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.456228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.456244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.456257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.456272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.456285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.456300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.456314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.456329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.456342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.456364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.456378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 
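Note on the completion statuses in the trace above: spdk_nvme_print_completion prints the NVMe status name followed by a numeric pair that matches (status code type/status code). (03/02) is Status Code Type 0x3 (Path Related Status) with Status Code 0x02 (Asymmetric Access Inaccessible), i.e. I/O completed with an ANA-inaccessible status while the tested path is reported inaccessible; (00/08) is Status Code Type 0x0 (Generic Command Status) with Status Code 0x08 (Command Aborted due to SQ Deletion), seen once the queue pairs are torn down. The sketch below is a hypothetical, standalone Python helper, not part of the SPDK test suite; it tallies these pairs from a saved copy of console output such as this one, and the log path is purely illustrative.

import re
from collections import Counter

# Map the (SCT, SC) pairs that appear in this trace to readable names.
# Any other pair is reported as raw hex so nothing is silently dropped.
STATUS_NAMES = {
    (0x0, 0x08): "ABORTED - SQ DELETION",           # Generic Command Status / SQ deletion
    (0x3, 0x02): "ASYMMETRIC ACCESS INACCESSIBLE",  # Path Related Status / ANA inaccessible
}

# Match each completion record and capture the two hex fields of "(SCT/SC)".
COMPLETION_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: .*?\((\w{2})/(\w{2})\)"
)

def tally_completions(path):
    counts = Counter()
    with open(path) as log:
        for line in log:
            # A single console line may hold several completion records.
            for sct_hex, sc_hex in COMPLETION_RE.findall(line):
                key = (int(sct_hex, 16), int(sc_hex, 16))
                name = STATUS_NAMES.get(key, f"sct={key[0]:#x} sc={key[1]:#x}")
                counts[name] += 1
    return counts

if __name__ == "__main__":
    # "console.log" is an assumed filename for a saved copy of this output.
    for status, n in tally_completions("console.log").most_common():
        print(f"{n:6d}  {status}")

Run against a captured log, this prints one count per status, which gives a quick view of how many commands completed with ANA-inaccessible status versus how many were aborted by SQ deletion during the teardown window.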
[2024-11-06 15:15:01.456395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.456409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.456424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.249 [2024-11-06 15:15:01.456438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.456453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.249 [2024-11-06 15:15:01.456466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.456481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.249 [2024-11-06 15:15:01.456495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.249 [2024-11-06 15:15:01.456510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.249 [2024-11-06 15:15:01.456523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.456538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 [2024-11-06 15:15:01.456552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.456567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.250 [2024-11-06 15:15:01.456580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.456595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.250 [2024-11-06 15:15:01.456609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.456655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.250 [2024-11-06 15:15:01.456669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.456684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 [2024-11-06 15:15:01.456698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.456713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.250 [2024-11-06 15:15:01.456727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.456756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.250 [2024-11-06 15:15:01.456792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.456808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 [2024-11-06 15:15:01.456821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.456850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.250 [2024-11-06 15:15:01.456863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.456877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.250 [2024-11-06 15:15:01.456890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.456904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.250 [2024-11-06 15:15:01.456916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.456930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.250 [2024-11-06 15:15:01.456943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.456957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.250 [2024-11-06 15:15:01.456970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.456984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.250 [2024-11-06 15:15:01.456996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 [2024-11-06 15:15:01.457023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:99064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 [2024-11-06 15:15:01.457049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 [2024-11-06 15:15:01.457076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 [2024-11-06 15:15:01.457103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 [2024-11-06 15:15:01.457130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 [2024-11-06 15:15:01.457164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 [2024-11-06 15:15:01.457192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 [2024-11-06 15:15:01.457218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.250 [2024-11-06 15:15:01.457245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 [2024-11-06 15:15:01.457271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.250 [2024-11-06 15:15:01.457298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99776 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.250 [2024-11-06 15:15:01.457325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 [2024-11-06 15:15:01.457352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 [2024-11-06 15:15:01.457378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.250 [2024-11-06 15:15:01.457405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.250 [2024-11-06 15:15:01.457432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.250 [2024-11-06 15:15:01.457458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.250 [2024-11-06 15:15:01.457487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.250 [2024-11-06 15:15:01.457520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 [2024-11-06 15:15:01.457547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 [2024-11-06 15:15:01.457574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 
[2024-11-06 15:15:01.457606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 [2024-11-06 15:15:01.457632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 [2024-11-06 15:15:01.457659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 [2024-11-06 15:15:01.457715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.250 [2024-11-06 15:15:01.457731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.250 [2024-11-06 15:15:01.457745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.457760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.457792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.457808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.457822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.457837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.457850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.457865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.457878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.457892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.457913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.457930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.457943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.457958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.457971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.457986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.251 [2024-11-06 15:15:01.457999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.251 [2024-11-06 15:15:01.458027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.251 [2024-11-06 15:15:01.458055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.458097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.251 [2024-11-06 15:15:01.458127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.458153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.251 [2024-11-06 15:15:01.458181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.251 [2024-11-06 15:15:01.458208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.458235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.458262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.251 [2024-11-06 15:15:01.458312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.251 [2024-11-06 15:15:01.458340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.458368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.251 [2024-11-06 15:15:01.458396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.251 [2024-11-06 15:15:01.458423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.251 [2024-11-06 15:15:01.458451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.251 [2024-11-06 15:15:01.458481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.458510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.458537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.458565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.458595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.458623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.458656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.458712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.458742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.458769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.458796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.251 [2024-11-06 15:15:01.458823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.458850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 
[2024-11-06 15:15:01.458864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.458877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.251 [2024-11-06 15:15:01.458903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.251 [2024-11-06 15:15:01.458930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.251 [2024-11-06 15:15:01.458945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.252 [2024-11-06 15:15:01.458960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.252 [2024-11-06 15:15:01.458975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.252 [2024-11-06 15:15:01.458987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.252 [2024-11-06 15:15:01.459002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.252 [2024-11-06 15:15:01.459014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.252 [2024-11-06 15:15:01.459028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.252 [2024-11-06 15:15:01.459048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.252 [2024-11-06 15:15:01.459065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.252 [2024-11-06 15:15:01.459078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.252 [2024-11-06 15:15:01.459092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.252 [2024-11-06 15:15:01.459105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.252 [2024-11-06 15:15:01.459118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.252 [2024-11-06 15:15:01.459131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.252 [2024-11-06 15:15:01.459145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.252 [2024-11-06 15:15:01.459158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.252 [2024-11-06 15:15:01.459172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.252 [2024-11-06 15:15:01.459184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.252 [2024-11-06 15:15:01.459197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2ec50 is same with the state(5) to be set 00:16:53.252 [2024-11-06 15:15:01.459213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:53.252 [2024-11-06 15:15:01.459223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:53.252 [2024-11-06 15:15:01.459234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99544 len:8 PRP1 0x0 PRP2 0x0 00:16:53.252 [2024-11-06 15:15:01.459246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.252 [2024-11-06 15:15:01.459318] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe2ec50 was disconnected and freed. reset controller. 00:16:53.252 [2024-11-06 15:15:01.460456] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:53.252 [2024-11-06 15:15:01.460497] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0bb20 (9): Bad file descriptor 00:16:53.252 [2024-11-06 15:15:01.460821] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:53.252 [2024-11-06 15:15:01.460898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:53.252 [2024-11-06 15:15:01.460950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:53.252 [2024-11-06 15:15:01.460972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0bb20 with addr=10.0.0.2, port=4421 00:16:53.252 [2024-11-06 15:15:01.460987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0bb20 is same with the state(5) to be set 00:16:53.252 [2024-11-06 15:15:01.461019] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0bb20 (9): Bad file descriptor 00:16:53.252 [2024-11-06 15:15:01.461051] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:53.252 [2024-11-06 15:15:01.461071] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:53.252 [2024-11-06 15:15:01.461096] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:53.252 [2024-11-06 15:15:01.461130] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:53.252 [2024-11-06 15:15:01.461148] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:53.252 [2024-11-06 15:15:11.508840] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
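The reconnect attempt to 10.0.0.2 port 4421 above is the multipath host failing over to the subsystem's second listener after the first path went away. A minimal sketch of how such a two-path attachment is typically provisioned with the same RPCs that appear later in this log; the second listener port and the -x failover flag are illustrative assumptions, not values read from this run:

  # target: expose the subsystem on two TCP listeners (second port assumed for illustration)
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # host: attach both paths under one controller name so bdev_nvme can fail over between them
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # dropping the active listener then produces the abort/reset/reconnect sequence logged above
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420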
00:16:53.252 Received shutdown signal, test time was about 55.525488 seconds 00:16:53.252 00:16:53.252 Latency(us) 00:16:53.252 [2024-11-06T15:15:22.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.252 [2024-11-06T15:15:22.527Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:53.252 Verification LBA range: start 0x0 length 0x4000 00:16:53.252 Nvme0n1 : 55.52 11029.36 43.08 0.00 0.00 11586.11 223.42 7015926.69 00:16:53.252 [2024-11-06T15:15:22.527Z] =================================================================================================================== 00:16:53.252 [2024-11-06T15:15:22.527Z] Total : 11029.36 43.08 0.00 0.00 11586.11 223.42 7015926.69 00:16:53.252 15:15:21 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.252 15:15:22 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:16:53.252 15:15:22 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:53.252 15:15:22 -- host/multipath.sh@125 -- # nvmftestfini 00:16:53.252 15:15:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:53.252 15:15:22 -- nvmf/common.sh@116 -- # sync 00:16:53.252 15:15:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:53.252 15:15:22 -- nvmf/common.sh@119 -- # set +e 00:16:53.252 15:15:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:53.252 15:15:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:53.252 rmmod nvme_tcp 00:16:53.252 rmmod nvme_fabrics 00:16:53.252 rmmod nvme_keyring 00:16:53.252 15:15:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:53.252 15:15:22 -- nvmf/common.sh@123 -- # set -e 00:16:53.252 15:15:22 -- nvmf/common.sh@124 -- # return 0 00:16:53.252 15:15:22 -- nvmf/common.sh@477 -- # '[' -n 72417 ']' 00:16:53.252 15:15:22 -- nvmf/common.sh@478 -- # killprocess 72417 00:16:53.252 15:15:22 -- common/autotest_common.sh@936 -- # '[' -z 72417 ']' 00:16:53.252 15:15:22 -- common/autotest_common.sh@940 -- # kill -0 72417 00:16:53.252 15:15:22 -- common/autotest_common.sh@941 -- # uname 00:16:53.252 15:15:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:53.252 15:15:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72417 00:16:53.252 15:15:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:53.252 15:15:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:53.252 killing process with pid 72417 00:16:53.252 15:15:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72417' 00:16:53.252 15:15:22 -- common/autotest_common.sh@955 -- # kill 72417 00:16:53.252 15:15:22 -- common/autotest_common.sh@960 -- # wait 72417 00:16:53.252 15:15:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:53.252 15:15:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:53.252 15:15:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:53.252 15:15:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:53.252 15:15:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:53.252 15:15:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.252 15:15:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.252 15:15:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.512 15:15:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:53.512 00:16:53.512 real 1m1.622s 00:16:53.512 user 2m51.070s 00:16:53.512 
sys 0m18.103s 00:16:53.512 15:15:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:53.512 ************************************ 00:16:53.512 END TEST nvmf_multipath 00:16:53.512 15:15:22 -- common/autotest_common.sh@10 -- # set +x 00:16:53.512 ************************************ 00:16:53.512 15:15:22 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:16:53.512 15:15:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:53.512 15:15:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:53.512 15:15:22 -- common/autotest_common.sh@10 -- # set +x 00:16:53.512 ************************************ 00:16:53.512 START TEST nvmf_timeout 00:16:53.512 ************************************ 00:16:53.512 15:15:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:16:53.512 * Looking for test storage... 00:16:53.512 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:53.512 15:15:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:53.512 15:15:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:53.512 15:15:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:53.512 15:15:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:53.512 15:15:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:53.512 15:15:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:53.512 15:15:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:53.512 15:15:22 -- scripts/common.sh@335 -- # IFS=.-: 00:16:53.512 15:15:22 -- scripts/common.sh@335 -- # read -ra ver1 00:16:53.512 15:15:22 -- scripts/common.sh@336 -- # IFS=.-: 00:16:53.512 15:15:22 -- scripts/common.sh@336 -- # read -ra ver2 00:16:53.512 15:15:22 -- scripts/common.sh@337 -- # local 'op=<' 00:16:53.512 15:15:22 -- scripts/common.sh@339 -- # ver1_l=2 00:16:53.512 15:15:22 -- scripts/common.sh@340 -- # ver2_l=1 00:16:53.512 15:15:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:53.512 15:15:22 -- scripts/common.sh@343 -- # case "$op" in 00:16:53.512 15:15:22 -- scripts/common.sh@344 -- # : 1 00:16:53.512 15:15:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:53.512 15:15:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:53.512 15:15:22 -- scripts/common.sh@364 -- # decimal 1 00:16:53.512 15:15:22 -- scripts/common.sh@352 -- # local d=1 00:16:53.512 15:15:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:53.512 15:15:22 -- scripts/common.sh@354 -- # echo 1 00:16:53.512 15:15:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:53.513 15:15:22 -- scripts/common.sh@365 -- # decimal 2 00:16:53.513 15:15:22 -- scripts/common.sh@352 -- # local d=2 00:16:53.513 15:15:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:53.513 15:15:22 -- scripts/common.sh@354 -- # echo 2 00:16:53.513 15:15:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:53.513 15:15:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:53.513 15:15:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:53.513 15:15:22 -- scripts/common.sh@367 -- # return 0 00:16:53.513 15:15:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:53.513 15:15:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:53.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.513 --rc genhtml_branch_coverage=1 00:16:53.513 --rc genhtml_function_coverage=1 00:16:53.513 --rc genhtml_legend=1 00:16:53.513 --rc geninfo_all_blocks=1 00:16:53.513 --rc geninfo_unexecuted_blocks=1 00:16:53.513 00:16:53.513 ' 00:16:53.513 15:15:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:53.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.513 --rc genhtml_branch_coverage=1 00:16:53.513 --rc genhtml_function_coverage=1 00:16:53.513 --rc genhtml_legend=1 00:16:53.513 --rc geninfo_all_blocks=1 00:16:53.513 --rc geninfo_unexecuted_blocks=1 00:16:53.513 00:16:53.513 ' 00:16:53.513 15:15:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:53.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.513 --rc genhtml_branch_coverage=1 00:16:53.513 --rc genhtml_function_coverage=1 00:16:53.513 --rc genhtml_legend=1 00:16:53.513 --rc geninfo_all_blocks=1 00:16:53.513 --rc geninfo_unexecuted_blocks=1 00:16:53.513 00:16:53.513 ' 00:16:53.513 15:15:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:53.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.513 --rc genhtml_branch_coverage=1 00:16:53.513 --rc genhtml_function_coverage=1 00:16:53.513 --rc genhtml_legend=1 00:16:53.513 --rc geninfo_all_blocks=1 00:16:53.513 --rc geninfo_unexecuted_blocks=1 00:16:53.513 00:16:53.513 ' 00:16:53.513 15:15:22 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:53.513 15:15:22 -- nvmf/common.sh@7 -- # uname -s 00:16:53.513 15:15:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.513 15:15:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.513 15:15:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.513 15:15:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.513 15:15:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.513 15:15:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.513 15:15:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.513 15:15:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.513 15:15:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.513 15:15:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.513 15:15:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:16:53.513 
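The lt/cmp_versions trace earlier in this block is a plain-bash comparison of dotted version strings, used here to decide whether the installed lcov is older than version 2 and needs the legacy --rc options. The same idea as a standalone sketch; ver_lt is an illustrative name, not the helper's real one, and it assumes purely numeric components:

  # return success when $1 is strictly older than $2 (numeric dotted versions only)
  ver_lt() {
    local IFS=.- v1 v2 i                      # split on '.' and '-', mirroring the IFS in the trace
    read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                                  # equal versions are not "less than"
  }
  ver_lt 1.15 2 && echo "old lcov, use legacy --rc options"   # matches the 'lt 1.15 2' call above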
15:15:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:16:53.513 15:15:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.513 15:15:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.513 15:15:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:53.513 15:15:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:53.513 15:15:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.513 15:15:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.513 15:15:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.513 15:15:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.513 15:15:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.513 15:15:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.513 15:15:22 -- paths/export.sh@5 -- # export PATH 00:16:53.513 15:15:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.513 15:15:22 -- nvmf/common.sh@46 -- # : 0 00:16:53.513 15:15:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:53.513 15:15:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:53.513 15:15:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:53.513 15:15:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.513 15:15:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.513 15:15:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:16:53.513 15:15:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:53.513 15:15:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:53.513 15:15:22 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:53.513 15:15:22 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:53.513 15:15:22 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:53.513 15:15:22 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:53.513 15:15:22 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:53.513 15:15:22 -- host/timeout.sh@19 -- # nvmftestinit 00:16:53.513 15:15:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:53.513 15:15:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.513 15:15:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:53.513 15:15:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:53.513 15:15:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:53.513 15:15:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.513 15:15:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.513 15:15:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.513 15:15:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:53.513 15:15:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:53.513 15:15:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:53.513 15:15:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:53.513 15:15:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:53.513 15:15:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:53.513 15:15:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.513 15:15:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:53.513 15:15:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:53.513 15:15:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:53.513 15:15:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:53.513 15:15:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:53.513 15:15:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:53.513 15:15:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:53.513 15:15:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:53.513 15:15:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:53.513 15:15:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:53.513 15:15:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:53.513 15:15:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:53.513 15:15:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:53.772 Cannot find device "nvmf_tgt_br" 00:16:53.772 15:15:22 -- nvmf/common.sh@154 -- # true 00:16:53.772 15:15:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:53.772 Cannot find device "nvmf_tgt_br2" 00:16:53.772 15:15:22 -- nvmf/common.sh@155 -- # true 00:16:53.772 15:15:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:53.772 15:15:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:53.772 Cannot find device "nvmf_tgt_br" 00:16:53.772 15:15:22 -- nvmf/common.sh@157 -- # true 00:16:53.772 15:15:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:53.772 Cannot find device "nvmf_tgt_br2" 00:16:53.772 15:15:22 -- nvmf/common.sh@158 -- # true 00:16:53.772 15:15:22 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:53.772 15:15:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:53.772 15:15:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:53.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:53.772 15:15:22 -- nvmf/common.sh@161 -- # true 00:16:53.772 15:15:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:53.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:53.772 15:15:22 -- nvmf/common.sh@162 -- # true 00:16:53.772 15:15:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:53.772 15:15:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:53.772 15:15:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:53.772 15:15:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:53.772 15:15:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:53.772 15:15:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:53.772 15:15:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:53.772 15:15:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:53.772 15:15:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:53.773 15:15:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:53.773 15:15:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:53.773 15:15:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:53.773 15:15:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:53.773 15:15:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:53.773 15:15:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:53.773 15:15:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:53.773 15:15:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:53.773 15:15:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:53.773 15:15:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:53.773 15:15:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:53.773 15:15:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:54.031 15:15:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:54.031 15:15:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:54.031 15:15:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:54.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:16:54.032 00:16:54.032 --- 10.0.0.2 ping statistics --- 00:16:54.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.032 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:54.032 15:15:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:54.032 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:54.032 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.028 ms 00:16:54.032 00:16:54.032 --- 10.0.0.3 ping statistics --- 00:16:54.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.032 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:54.032 15:15:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:54.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:54.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:54.032 00:16:54.032 --- 10.0.0.1 ping statistics --- 00:16:54.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.032 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:54.032 15:15:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.032 15:15:23 -- nvmf/common.sh@421 -- # return 0 00:16:54.032 15:15:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:54.032 15:15:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.032 15:15:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:54.032 15:15:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:54.032 15:15:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.032 15:15:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:54.032 15:15:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:54.032 15:15:23 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:16:54.032 15:15:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:54.032 15:15:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:54.032 15:15:23 -- common/autotest_common.sh@10 -- # set +x 00:16:54.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.032 15:15:23 -- nvmf/common.sh@469 -- # nvmfpid=73605 00:16:54.032 15:15:23 -- nvmf/common.sh@470 -- # waitforlisten 73605 00:16:54.032 15:15:23 -- common/autotest_common.sh@829 -- # '[' -z 73605 ']' 00:16:54.032 15:15:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:54.032 15:15:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.032 15:15:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:54.032 15:15:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.032 15:15:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:54.032 15:15:23 -- common/autotest_common.sh@10 -- # set +x 00:16:54.032 [2024-11-06 15:15:23.147191] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:54.032 [2024-11-06 15:15:23.147325] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.032 [2024-11-06 15:15:23.280146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:54.291 [2024-11-06 15:15:23.334440] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:54.291 [2024-11-06 15:15:23.334608] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.291 [2024-11-06 15:15:23.334621] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
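Behind the nvmftestinit/nvmfappstart trace above, the test wires a veth pair into a network namespace, verifies connectivity with ping, and then starts the target inside that namespace. A condensed sketch of the same flow, using the interface, namespace, path, and address names from this log; the polling loop at the end is an illustrative stand-in for the waitforlisten helper, not the helper itself:

  # topology: initiator stays in the root namespace, target lives in nvmf_tgt_ns_spdk
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2          # root namespace -> target namespace, as checked above

  # start the NVMe-oF target on cores 0-1 inside the namespace, then wait for its RPC socket
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
  done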
00:16:54.291 [2024-11-06 15:15:23.334630] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.291 [2024-11-06 15:15:23.334800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.291 [2024-11-06 15:15:23.334810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.229 15:15:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:55.229 15:15:24 -- common/autotest_common.sh@862 -- # return 0 00:16:55.229 15:15:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:55.229 15:15:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:55.229 15:15:24 -- common/autotest_common.sh@10 -- # set +x 00:16:55.229 15:15:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.229 15:15:24 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:55.229 15:15:24 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:55.229 [2024-11-06 15:15:24.464774] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.229 15:15:24 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:55.797 Malloc0 00:16:55.797 15:15:24 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:55.797 15:15:25 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:56.056 15:15:25 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:56.315 [2024-11-06 15:15:25.456116] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.315 15:15:25 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:16:56.315 15:15:25 -- host/timeout.sh@32 -- # bdevperf_pid=73660 00:16:56.315 15:15:25 -- host/timeout.sh@34 -- # waitforlisten 73660 /var/tmp/bdevperf.sock 00:16:56.316 15:15:25 -- common/autotest_common.sh@829 -- # '[' -z 73660 ']' 00:16:56.316 15:15:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:56.316 15:15:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.316 15:15:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:56.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:56.316 15:15:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.316 15:15:25 -- common/autotest_common.sh@10 -- # set +x 00:16:56.316 [2024-11-06 15:15:25.511417] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
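The host/timeout.sh@25-29 calls above are the standard target-side provisioning sequence: create a TCP transport, back a subsystem with a malloc bdev, and publish a listener. Collected into one place, with the values copied from the trace (the rpc shell variable is just shorthand for the script path used throughout this log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev with 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # the nvmf_tcp_listen notice above confirms the target is now listening on 10.0.0.2:4420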
00:16:56.316 [2024-11-06 15:15:25.511510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73660 ] 00:16:56.574 [2024-11-06 15:15:25.647814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.574 [2024-11-06 15:15:25.716945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.509 15:15:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.509 15:15:26 -- common/autotest_common.sh@862 -- # return 0 00:16:57.509 15:15:26 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:57.509 15:15:26 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:16:57.768 NVMe0n1 00:16:57.768 15:15:26 -- host/timeout.sh@51 -- # rpc_pid=73678 00:16:57.768 15:15:26 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:57.768 15:15:26 -- host/timeout.sh@53 -- # sleep 1 00:16:58.026 Running I/O for 10 seconds... 00:16:58.965 15:15:27 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:58.965 [2024-11-06 15:15:28.174748] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128c480 is same with the state(5) to be set 00:16:58.965 [2024-11-06 15:15:28.174802] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128c480 is same with the state(5) to be set 00:16:58.965 [2024-11-06 15:15:28.174830] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128c480 is same with the state(5) to be set 00:16:58.965 [2024-11-06 15:15:28.174837] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128c480 is same with the state(5) to be set 00:16:58.965 [2024-11-06 15:15:28.174845] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128c480 is same with the state(5) to be set 00:16:58.965 [2024-11-06 15:15:28.174852] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128c480 is same with the state(5) to be set 00:16:58.965 [2024-11-06 15:15:28.174860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128c480 is same with the state(5) to be set 00:16:58.965 [2024-11-06 15:15:28.174867] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128c480 is same with the state(5) to be set 00:16:58.965 [2024-11-06 15:15:28.174874] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128c480 is same with the state(5) to be set 00:16:58.965 [2024-11-06 15:15:28.174881] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128c480 is same with the state(5) to be set 00:16:58.965 [2024-11-06 15:15:28.174889] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128c480 is same with the state(5) to be set 00:16:58.965 [2024-11-06 15:15:28.174896] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128c480 is same with the state(5) to be set 00:16:58.965 
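This is where the timeout scenario is armed: bdevperf attaches the controller with a 5-second controller-loss timeout and a 2-second reconnect delay, then the test removes the target's listener while the 10-second verify workload is still running, so the connection drops with I/O in flight. The READ/WRITE notices that follow are that in-flight I/O being completed as ABORTED - SQ DELETION. The commands that matter, copied from the log (the meaning of the set_options flag -r -1 is left to bdev_nvme_set_options --help for this SPDK revision):

  # host side, against the bdevperf RPC socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # target side: remove the listener out from under the running verify job
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420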
[2024-11-06 15:15:28.174903] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128c480 is same with the state(5) to be set 00:16:58.965 [2024-11-06 15:15:28.174910] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128c480 is same with the state(5) to be set 00:16:58.965 [2024-11-06 15:15:28.175248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.175484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.175554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:121552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.175752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.175825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.176019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:121568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.176273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:121592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.176301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.176321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.176341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.176361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.176381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.176401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.176420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.176440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.176459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.176479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.176501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:121640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.965 [2024-11-06 15:15:28.176521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.965 [2024-11-06 15:15:28.176544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:121656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.965 [2024-11-06 15:15:28.176565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:121664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.965 [2024-11-06 15:15:28.176584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176610] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:59 nsid:1 lba:121672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.176618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.965 [2024-11-06 15:15:28.176638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.965 [2024-11-06 15:15:28.176656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.176675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.965 [2024-11-06 15:15:28.176710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:121712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.965 [2024-11-06 15:15:28.176729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:121720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.965 [2024-11-06 15:15:28.176749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:121728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.176783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:121736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.176801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:121000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.176819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 
lba:121008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.965 [2024-11-06 15:15:28.176839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.965 [2024-11-06 15:15:28.176849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.176858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.176868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:121024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.176877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.176887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:121048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.176895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.176905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.176914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.176924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.176932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.176942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:121096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.176950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.176960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:121744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.176968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.176978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.176987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.176997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.177005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:121768 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.177024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:121776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.177043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.966 [2024-11-06 15:15:28.177076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:121792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.966 [2024-11-06 15:15:28.177094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:121800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.966 [2024-11-06 15:15:28.177112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.966 [2024-11-06 15:15:28.177130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:121816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.177148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.177166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:121144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.177185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:121152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.177204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 
[2024-11-06 15:15:28.177238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.177256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.177274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.177292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:121216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.177310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.177329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.966 [2024-11-06 15:15:28.177347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.177366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.966 [2024-11-06 15:15:28.177385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:121856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.966 [2024-11-06 15:15:28.177421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.177440] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:121872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.177459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.966 [2024-11-06 15:15:28.177478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:121888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.966 [2024-11-06 15:15:28.177498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:121896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.177517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.966 [2024-11-06 15:15:28.177536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.966 [2024-11-06 15:15:28.177555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.177574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.177593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.177627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:121272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.966 [2024-11-06 15:15:28.177645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.966 [2024-11-06 15:15:28.177655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:121280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.177664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.177673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.177682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.177692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:121312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.177700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.177711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.177719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.177740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.177751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.177761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:121928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.177769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.177779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:121936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.177788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.177798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:121944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.967 [2024-11-06 15:15:28.177806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.177817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:121952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.967 [2024-11-06 15:15:28.177825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.177836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.967 [2024-11-06 15:15:28.177844] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.177854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:121968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.967 [2024-11-06 15:15:28.177863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.177873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:121976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.967 [2024-11-06 15:15:28.177882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.177891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:121984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.967 [2024-11-06 15:15:28.177900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.177910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.967 [2024-11-06 15:15:28.177919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.177929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:122000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.177937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.177948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:122008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.967 [2024-11-06 15:15:28.177956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.177966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:122016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.967 [2024-11-06 15:15:28.177974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.177984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:122024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.967 [2024-11-06 15:15:28.177992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:122032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.178011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:122040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.967 [2024-11-06 15:15:28.178029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:122048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.967 [2024-11-06 15:15:28.178047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:122056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.967 [2024-11-06 15:15:28.178070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.178089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:122072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.967 [2024-11-06 15:15:28.178108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:121376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.178126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:121408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.178145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.178163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:121440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.178182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.178200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:121456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.178219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:58.967 [2024-11-06 15:15:28.178229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.178237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:121512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.178256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:122080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.178274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:122088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.967 [2024-11-06 15:15:28.178292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.967 [2024-11-06 15:15:28.178311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.178329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:122112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.178348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:122120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.967 [2024-11-06 15:15:28.178369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.967 [2024-11-06 15:15:28.178388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.967 [2024-11-06 15:15:28.178398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:122136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.967 [2024-11-06 15:15:28.178406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 
15:15:28.178416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.968 [2024-11-06 15:15:28.178425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.178435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:122152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.968 [2024-11-06 15:15:28.178444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.178456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.968 [2024-11-06 15:15:28.178465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.178475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.968 [2024-11-06 15:15:28.178483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.178493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.968 [2024-11-06 15:15:28.178502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.178512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:122184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.968 [2024-11-06 15:15:28.178520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.178530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:122192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.968 [2024-11-06 15:15:28.178539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.178549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:122200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.968 [2024-11-06 15:15:28.178557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.178567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:122208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:58.968 [2024-11-06 15:15:28.178575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.178585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.968 [2024-11-06 15:15:28.178593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.178603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.968 [2024-11-06 15:15:28.178611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.178621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:121536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.968 [2024-11-06 15:15:28.178630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.178640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.968 [2024-11-06 15:15:28.178648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.179041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.968 [2024-11-06 15:15:28.179110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.179475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd210c0 is same with the state(5) to be set 00:16:58.968 [2024-11-06 15:15:28.179631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:58.968 [2024-11-06 15:15:28.179664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:58.968 [2024-11-06 15:15:28.179712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121584 len:8 PRP1 0x0 PRP2 0x0 00:16:58.968 [2024-11-06 15:15:28.179767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.179962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:58.968 [2024-11-06 15:15:28.179999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:58.968 [2024-11-06 15:15:28.180030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121600 len:8 PRP1 0x0 PRP2 0x0 00:16:58.968 [2024-11-06 15:15:28.180159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.180217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:58.968 [2024-11-06 15:15:28.180314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:58.968 [2024-11-06 15:15:28.180399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121616 len:8 PRP1 0x0 PRP2 0x0 00:16:58.968 [2024-11-06 15:15:28.180461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.180574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:58.968 [2024-11-06 15:15:28.180616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:58.968 [2024-11-06 
15:15:28.180647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121632 len:8 PRP1 0x0 PRP2 0x0 00:16:58.968 [2024-11-06 15:15:28.180777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.180869] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd210c0 was disconnected and freed. reset controller. 00:16:58.968 [2024-11-06 15:15:28.181054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.968 [2024-11-06 15:15:28.181195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.181269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.968 [2024-11-06 15:15:28.181430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.181599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.968 [2024-11-06 15:15:28.181772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.181880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.968 [2024-11-06 15:15:28.181895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.968 [2024-11-06 15:15:28.181904] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbe010 is same with the state(5) to be set 00:16:58.968 [2024-11-06 15:15:28.182140] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:58.968 [2024-11-06 15:15:28.182165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbe010 (9): Bad file descriptor 00:16:58.968 [2024-11-06 15:15:28.182262] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:58.968 [2024-11-06 15:15:28.182351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:58.968 [2024-11-06 15:15:28.182393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:58.968 [2024-11-06 15:15:28.182409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbe010 with addr=10.0.0.2, port=4420 00:16:58.968 [2024-11-06 15:15:28.182419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbe010 is same with the state(5) to be set 00:16:58.968 [2024-11-06 15:15:28.182438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbe010 (9): Bad file descriptor 00:16:58.968 [2024-11-06 15:15:28.182454] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:58.968 [2024-11-06 15:15:28.182463] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:58.968 [2024-11-06 15:15:28.182473] 
nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:58.968 [2024-11-06 15:15:28.182492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:58.968 [2024-11-06 15:15:28.182503] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:58.968 15:15:28 -- host/timeout.sh@56 -- # sleep 2 00:17:01.500 [2024-11-06 15:15:30.182749] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:01.500 [2024-11-06 15:15:30.183083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:01.500 [2024-11-06 15:15:30.183181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:01.500 [2024-11-06 15:15:30.183327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbe010 with addr=10.0.0.2, port=4420 00:17:01.500 [2024-11-06 15:15:30.183474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbe010 is same with the state(5) to be set 00:17:01.500 [2024-11-06 15:15:30.183632] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbe010 (9): Bad file descriptor 00:17:01.500 [2024-11-06 15:15:30.183852] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:01.500 [2024-11-06 15:15:30.183991] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:01.500 [2024-11-06 15:15:30.184126] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:01.500 [2024-11-06 15:15:30.184252] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:01.500 [2024-11-06 15:15:30.184301] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:01.500 15:15:30 -- host/timeout.sh@57 -- # get_controller 00:17:01.500 15:15:30 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:01.500 15:15:30 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:01.500 15:15:30 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:17:01.500 15:15:30 -- host/timeout.sh@58 -- # get_bdev 00:17:01.500 15:15:30 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:01.500 15:15:30 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:01.500 15:15:30 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:17:01.500 15:15:30 -- host/timeout.sh@61 -- # sleep 5 00:17:03.402 [2024-11-06 15:15:32.184637] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:03.402 [2024-11-06 15:15:32.184947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:03.402 [2024-11-06 15:15:32.185045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:03.402 [2024-11-06 15:15:32.185155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbe010 with addr=10.0.0.2, port=4420 00:17:03.402 [2024-11-06 15:15:32.185174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbe010 is same with the state(5) to be set 00:17:03.402 [2024-11-06 15:15:32.185205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbe010 (9): Bad file descriptor 00:17:03.402 [2024-11-06 15:15:32.185224] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:03.402 [2024-11-06 15:15:32.185234] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:03.402 [2024-11-06 15:15:32.185244] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:03.402 [2024-11-06 15:15:32.185271] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:03.402 [2024-11-06 15:15:32.185282] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:05.302 [2024-11-06 15:15:34.185312] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:05.302 [2024-11-06 15:15:34.185407] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:05.302 [2024-11-06 15:15:34.185435] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:05.302 [2024-11-06 15:15:34.185445] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:17:05.302 [2024-11-06 15:15:34.185474] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
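The cadence visible above matches the attach options: reconnect attempts land at 15:15:28, 15:15:30 and 15:15:32, one every --reconnect-delay-sec 2 seconds, and each fails with errno 111 because the listener is gone, while the mid-test get_controller/get_bdev checks confirm NVMe0 and NVMe0n1 still exist during the retry window. Once --ctrlr-loss-timeout-sec 5 expires the controller is dropped, which the later empty get_controller/get_bdev checks verify. A hypothetical watch loop (not part of the test) that would observe the same thing from outside:

  # poll the bdevperf RPC socket until bdev_nvme_get_controllers returns an empty list
  while [ -n "$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
          bdev_nvme_get_controllers | jq -r '.[].name')" ]; do
      sleep 1
  done
  echo 'controller deleted once ctrlr-loss-timeout expired'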
00:17:06.238 00:17:06.238 Latency(us) 00:17:06.238 [2024-11-06T15:15:35.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.238 [2024-11-06T15:15:35.513Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:06.238 Verification LBA range: start 0x0 length 0x4000 00:17:06.238 NVMe0n1 : 8.11 1869.11 7.30 15.79 0.00 67788.98 3023.59 7015926.69 00:17:06.238 [2024-11-06T15:15:35.513Z] =================================================================================================================== 00:17:06.238 [2024-11-06T15:15:35.513Z] Total : 1869.11 7.30 15.79 0.00 67788.98 3023.59 7015926.69 00:17:06.238 0 00:17:06.496 15:15:35 -- host/timeout.sh@62 -- # get_controller 00:17:06.496 15:15:35 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:06.496 15:15:35 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:06.754 15:15:36 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:17:06.754 15:15:36 -- host/timeout.sh@63 -- # get_bdev 00:17:06.754 15:15:36 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:06.754 15:15:36 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:07.322 15:15:36 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:17:07.322 15:15:36 -- host/timeout.sh@65 -- # wait 73678 00:17:07.322 15:15:36 -- host/timeout.sh@67 -- # killprocess 73660 00:17:07.322 15:15:36 -- common/autotest_common.sh@936 -- # '[' -z 73660 ']' 00:17:07.322 15:15:36 -- common/autotest_common.sh@940 -- # kill -0 73660 00:17:07.322 15:15:36 -- common/autotest_common.sh@941 -- # uname 00:17:07.322 15:15:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:07.322 15:15:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73660 00:17:07.322 killing process with pid 73660 00:17:07.322 Received shutdown signal, test time was about 9.278917 seconds 00:17:07.322 00:17:07.322 Latency(us) 00:17:07.322 [2024-11-06T15:15:36.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.322 [2024-11-06T15:15:36.597Z] =================================================================================================================== 00:17:07.322 [2024-11-06T15:15:36.597Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:07.322 15:15:36 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:07.322 15:15:36 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:07.322 15:15:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73660' 00:17:07.322 15:15:36 -- common/autotest_common.sh@955 -- # kill 73660 00:17:07.322 15:15:36 -- common/autotest_common.sh@960 -- # wait 73660 00:17:07.322 15:15:36 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.581 [2024-11-06 15:15:36.756439] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:07.581 15:15:36 -- host/timeout.sh@74 -- # bdevperf_pid=73805 00:17:07.581 15:15:36 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:07.581 15:15:36 -- host/timeout.sh@76 -- # waitforlisten 73805 /var/tmp/bdevperf.sock 00:17:07.581 15:15:36 -- common/autotest_common.sh@829 -- # '[' -z 73805 ']' 00:17:07.581 15:15:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:07.581 15:15:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:07.581 15:15:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:07.581 15:15:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:07.581 15:15:36 -- common/autotest_common.sh@10 -- # set +x 00:17:07.581 [2024-11-06 15:15:36.821989] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:07.581 [2024-11-06 15:15:36.822252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73805 ] 00:17:07.840 [2024-11-06 15:15:36.954045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.840 [2024-11-06 15:15:37.008786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.776 15:15:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.776 15:15:37 -- common/autotest_common.sh@862 -- # return 0 00:17:08.776 15:15:37 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:08.776 15:15:38 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:17:09.035 NVMe0n1 00:17:09.035 15:15:38 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:09.035 15:15:38 -- host/timeout.sh@84 -- # rpc_pid=73824 00:17:09.035 15:15:38 -- host/timeout.sh@86 -- # sleep 1 00:17:09.293 Running I/O for 10 seconds... 
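The run that starts here exercises controller loss and recovery: bdevperf attaches NVMe0 with --ctrlr-loss-timeout-sec 5, --fast-io-fail-timeout-sec 2 and --reconnect-delay-sec 1, and the script then removes the target's TCP listener while the verify workload is in flight (the entries that follow) and restores it once the reconnect failures have been observed. A minimal sketch of that fault-injection pair, using only the target-side RPCs that appear in this log; the sleep duration is illustrative, the script paces this with its own sleeps:

    # Drop the target listener while bdevperf's verify workload is running,
    # let the initiator fail its reconnect attempts, then bring it back.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 5   # illustrative pause while reconnects fail with errno 111
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420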
00:17:10.230 15:15:39 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.491 [2024-11-06 15:15:39.565654] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.491 [2024-11-06 15:15:39.565981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.491 [2024-11-06 15:15:39.566015] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.491 [2024-11-06 15:15:39.566024] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.491 [2024-11-06 15:15:39.566033] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.491 [2024-11-06 15:15:39.566042] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.491 [2024-11-06 15:15:39.566051] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.491 [2024-11-06 15:15:39.566060] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.491 [2024-11-06 15:15:39.566068] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.491 [2024-11-06 15:15:39.566077] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.491 [2024-11-06 15:15:39.566086] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.491 [2024-11-06 15:15:39.566109] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.491 [2024-11-06 15:15:39.566118] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.491 [2024-11-06 15:15:39.566126] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.491 [2024-11-06 15:15:39.566135] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.491 [2024-11-06 15:15:39.566143] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.491 [2024-11-06 15:15:39.566151] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566159] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566168] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566177] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566185] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566193] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566202] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566210] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566218] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566226] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566234] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566243] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566253] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566261] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566270] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566278] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566301] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566309] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566318] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566327] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566335] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566343] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566351] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566359] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566367] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566375] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566384] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566392] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566400] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec7b0 is same with the state(5) to be set 00:17:10.492 [2024-11-06 15:15:39.566459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.566978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.566989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.492 [2024-11-06 15:15:39.566998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.567009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.567018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.567028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.567037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.567048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.492 [2024-11-06 15:15:39.567058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.567069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.567078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.492 [2024-11-06 15:15:39.567089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.492 [2024-11-06 15:15:39.567098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:10.493 [2024-11-06 15:15:39.567149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.493 [2024-11-06 15:15:39.567325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.493 [2024-11-06 15:15:39.567366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 
15:15:39.567377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.493 [2024-11-06 15:15:39.567387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.493 [2024-11-06 15:15:39.567449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.493 [2024-11-06 15:15:39.567469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.493 [2024-11-06 15:15:39.567490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.493 [2024-11-06 15:15:39.567531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:126712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.493 [2024-11-06 15:15:39.567593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.493 [2024-11-06 15:15:39.567821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567833] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.493 [2024-11-06 15:15:39.567842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.493 [2024-11-06 15:15:39.567883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.493 [2024-11-06 15:15:39.567905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.493 [2024-11-06 15:15:39.567979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.493 [2024-11-06 15:15:39.567989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.494 [2024-11-06 15:15:39.568030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126392 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.494 [2024-11-06 15:15:39.568310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:126840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.494 [2024-11-06 15:15:39.568331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.494 [2024-11-06 15:15:39.568352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.494 [2024-11-06 15:15:39.568394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.494 [2024-11-06 15:15:39.568415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.494 [2024-11-06 15:15:39.568451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 
[2024-11-06 15:15:39.568510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.494 [2024-11-06 15:15:39.568530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.494 [2024-11-06 15:15:39.568655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.494 [2024-11-06 15:15:39.568695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568715] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.494 [2024-11-06 15:15:39.568766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.494 [2024-11-06 15:15:39.568858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.494 [2024-11-06 15:15:39.568867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.568878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.495 [2024-11-06 15:15:39.568889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.568900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.495 [2024-11-06 15:15:39.568909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.568920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.495 [2024-11-06 15:15:39.568929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.568940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:127008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.495 [2024-11-06 15:15:39.568950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.568960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.495 [2024-11-06 15:15:39.568969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.568983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.495 [2024-11-06 15:15:39.568992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.569003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.495 [2024-11-06 15:15:39.569012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.569023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.495 [2024-11-06 15:15:39.569032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.569043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.495 [2024-11-06 15:15:39.569052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.569063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.495 [2024-11-06 15:15:39.569072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.569083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.495 [2024-11-06 15:15:39.569091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.569102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.495 [2024-11-06 15:15:39.569111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.569121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.495 [2024-11-06 15:15:39.569130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.569141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.495 [2024-11-06 15:15:39.569150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.569161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.495 [2024-11-06 15:15:39.569170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.569181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:127104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.495 [2024-11-06 15:15:39.569190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.569201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.495 [2024-11-06 15:15:39.569212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.569223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.495 [2024-11-06 15:15:39.569232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.569243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:127128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.495 [2024-11-06 15:15:39.569252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.569263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.495 [2024-11-06 15:15:39.569272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.569282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f30c0 is same with the state(5) to be set 00:17:10.495 [2024-11-06 15:15:39.569293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:10.495 [2024-11-06 15:15:39.569303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:10.495 [2024-11-06 15:15:39.569311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127144 len:8 PRP1 0x0 PRP2 0x0 00:17:10.495 [2024-11-06 15:15:39.569320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.569362] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20f30c0 was disconnected and freed. reset controller. 
00:17:10.495 [2024-11-06 15:15:39.569467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.495 [2024-11-06 15:15:39.569485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.569496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.495 [2024-11-06 15:15:39.569505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.569515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.495 [2024-11-06 15:15:39.569524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.569535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.495 [2024-11-06 15:15:39.569544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.495 [2024-11-06 15:15:39.569553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2090010 is same with the state(5) to be set 00:17:10.495 [2024-11-06 15:15:39.569788] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:10.495 [2024-11-06 15:15:39.569812] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2090010 (9): Bad file descriptor 00:17:10.495 [2024-11-06 15:15:39.569911] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:10.495 [2024-11-06 15:15:39.569976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:10.495 [2024-11-06 15:15:39.570022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:10.495 [2024-11-06 15:15:39.570039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2090010 with addr=10.0.0.2, port=4420 00:17:10.495 [2024-11-06 15:15:39.570050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2090010 is same with the state(5) to be set 00:17:10.495 [2024-11-06 15:15:39.570069] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2090010 (9): Bad file descriptor 00:17:10.495 [2024-11-06 15:15:39.570098] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:10.495 [2024-11-06 15:15:39.570111] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:10.495 [2024-11-06 15:15:39.580828] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:10.495 [2024-11-06 15:15:39.580894] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:10.495 [2024-11-06 15:15:39.580915] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:10.495 15:15:39 -- host/timeout.sh@90 -- # sleep 1
00:17:11.431 [2024-11-06 15:15:40.581049] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:17:11.431 [2024-11-06 15:15:40.581400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:11.431 [2024-11-06 15:15:40.581459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:11.431 [2024-11-06 15:15:40.581478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2090010 with addr=10.0.0.2, port=4420
00:17:11.431 [2024-11-06 15:15:40.581492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2090010 is same with the state(5) to be set
00:17:11.431 [2024-11-06 15:15:40.581524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2090010 (9): Bad file descriptor
00:17:11.431 [2024-11-06 15:15:40.581560] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:17:11.431 [2024-11-06 15:15:40.581572] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:17:11.431 [2024-11-06 15:15:40.581583] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:11.431 [2024-11-06 15:15:40.581612] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:11.431 [2024-11-06 15:15:40.581624] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:11.431 15:15:40 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:11.690 [2024-11-06 15:15:40.846571] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:11.690 15:15:40 -- host/timeout.sh@92 -- # wait 73824
00:17:12.625 [2024-11-06 15:15:41.592404] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:17:19.190
00:17:19.190 Latency(us)
00:17:19.190 [2024-11-06T15:15:48.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:19.190 [2024-11-06T15:15:48.465Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:19.190 Verification LBA range: start 0x0 length 0x4000
00:17:19.190 NVMe0n1 : 10.01 9862.87 38.53 0.00 0.00 12952.75 1020.28 3019898.88
00:17:19.190 [2024-11-06T15:15:48.465Z] ===================================================================================================================
00:17:19.190 [2024-11-06T15:15:48.465Z] Total : 9862.87 38.53 0.00 0.00 12952.75 1020.28 3019898.88
00:17:19.190 0
00:17:19.190 15:15:48 -- host/timeout.sh@97 -- # rpc_pid=73933
00:17:19.190 15:15:48 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:19.448 15:15:48 -- host/timeout.sh@98 -- # sleep 1
00:17:19.448 Running I/O for 10 seconds...
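The host/timeout.sh trace above exercises a listener-drop/reconnect cycle: bdevperf runs in the background while the subsystem's TCP listener is removed, every reconnect attempt is refused (errno = 111), and once nvmf_subsystem_add_listener is issued the pending controller reset completes and bdevperf prints the verification summary shown in the table. A minimal sketch of that cycle, reusing the rpc.py subcommands and address visible in the trace (the rpc_py variable and the surrounding shell are illustrative, not the actual test script):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # drop the listener; the host's reconnect attempts now fail with errno 111 (connection refused)
    $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    # restore the listener; the queued controller reset can now succeed
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    wait "$rpc_pid"   # wait for the backgrounded bdevperf perform_tests job to finish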
00:17:20.384 15:15:49 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:20.645 [2024-11-06 15:15:49.717631] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717723] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717732] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717741] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717750] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717758] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717767] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717776] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717785] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717794] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717803] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717811] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717820] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717829] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717837] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717846] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717854] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717863] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717871] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717880] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717890] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717904] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717919] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717933] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.717943] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eb4a0 is same with the state(5) to be set 00:17:20.645 [2024-11-06 15:15:49.718004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.645 [2024-11-06 15:15:49.718035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.645 [2024-11-06 15:15:49.718059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:128048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.645 [2024-11-06 15:15:49.718071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.645 [2024-11-06 15:15:49.718083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.645 [2024-11-06 15:15:49.718093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.645 [2024-11-06 15:15:49.718105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.645 [2024-11-06 15:15:49.718114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.645 [2024-11-06 15:15:49.718126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.645 [2024-11-06 15:15:49.718135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.645 [2024-11-06 15:15:49.718155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.645 [2024-11-06 15:15:49.718164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.645 [2024-11-06 15:15:49.718175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.645 [2024-11-06 15:15:49.718184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.645 [2024-11-06 15:15:49.718196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.645 
[2024-11-06 15:15:49.718205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.645 [2024-11-06 15:15:49.718216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.645 [2024-11-06 15:15:49.718225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.645 [2024-11-06 15:15:49.718236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.645 [2024-11-06 15:15:49.718246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.645 [2024-11-06 15:15:49.718257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.645 [2024-11-06 15:15:49.718266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.645 [2024-11-06 15:15:49.718277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.645 [2024-11-06 15:15:49.718286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.645 [2024-11-06 15:15:49.718297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.645 [2024-11-06 15:15:49.718306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.645 [2024-11-06 15:15:49.718317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.645 [2024-11-06 15:15:49.718326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.645 [2024-11-06 15:15:49.718337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.645 [2024-11-06 15:15:49.718347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.645 [2024-11-06 15:15:49.718358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.645 [2024-11-06 15:15:49.718367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.718389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.718410] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.718430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.718451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.718472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.718492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.718512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.718533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.718553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.646 [2024-11-06 15:15:49.718574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.646 [2024-11-06 15:15:49.718594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.646 [2024-11-06 15:15:49.718614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.718635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.646 [2024-11-06 15:15:49.718668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.646 [2024-11-06 15:15:49.718693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.646 [2024-11-06 15:15:49.718714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.646 [2024-11-06 15:15:49.718735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.718756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.718777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.718798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.718818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.646 [2024-11-06 15:15:49.718839] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.646 [2024-11-06 15:15:49.718859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.646 [2024-11-06 15:15:49.718879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.718899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.646 [2024-11-06 15:15:49.718919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.718940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.718960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.718981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.718992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.719001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.719012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.719021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.719032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.719041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.719052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.719062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.719073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.719083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.719094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.719103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.719115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.719124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.719135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.719144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.719155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.646 [2024-11-06 15:15:49.719164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.719175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.646 [2024-11-06 15:15:49.719184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.646 [2024-11-06 15:15:49.719195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.647 [2024-11-06 15:15:49.719205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.719225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.647 [2024-11-06 15:15:49.719246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.719266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.647 [2024-11-06 15:15:49.719297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.719319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.647 [2024-11-06 15:15:49.719339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.647 [2024-11-06 15:15:49.719361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.647 [2024-11-06 15:15:49.719381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.719401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.719422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.719442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.719463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:20.647 [2024-11-06 15:15:49.719475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.647 [2024-11-06 15:15:49.719485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.719506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.719527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.719547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.719568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.719588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.719608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.719633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.719665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.719689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 
15:15:49.719701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.647 [2024-11-06 15:15:49.719711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.719731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.647 [2024-11-06 15:15:49.719752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.647 [2024-11-06 15:15:49.719772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.647 [2024-11-06 15:15:49.719793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.647 [2024-11-06 15:15:49.719819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.719840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.719860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.719881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.647 [2024-11-06 15:15:49.719902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.647 [2024-11-06 15:15:49.719922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.647 [2024-11-06 15:15:49.719942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.647 [2024-11-06 15:15:49.719963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.647 [2024-11-06 15:15:49.719985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.719997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.720006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.720018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.647 [2024-11-06 15:15:49.720027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.647 [2024-11-06 15:15:49.720039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.647 [2024-11-06 15:15:49.720049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720122] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:128656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.648 [2024-11-06 15:15:49.720255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.648 [2024-11-06 15:15:49.720276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.648 [2024-11-06 15:15:49.720319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 
lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.648 [2024-11-06 15:15:49.720361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.648 [2024-11-06 15:15:49.720422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.648 [2024-11-06 15:15:49.720463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.648 [2024-11-06 15:15:49.720504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.648 [2024-11-06 15:15:49.720586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.648 [2024-11-06 15:15:49.720607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.648 [2024-11-06 15:15:49.720727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2106c30 is same with the state(5) to be set 00:17:20.648 [2024-11-06 15:15:49.720750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:20.648 [2024-11-06 15:15:49.720758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:20.648 [2024-11-06 15:15:49.720766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:128776 len:8 PRP1 0x0 PRP2 0x0 00:17:20.648 [2024-11-06 15:15:49.720776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720820] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2106c30 was disconnected and freed. reset controller. 00:17:20.648 [2024-11-06 15:15:49.720905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.648 [2024-11-06 15:15:49.720923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.648 [2024-11-06 15:15:49.720943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.648 [2024-11-06 15:15:49.720962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.648 [2024-11-06 15:15:49.720981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.648 [2024-11-06 15:15:49.720990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2090010 is same with the state(5) to be set 00:17:20.648 [2024-11-06 15:15:49.721211] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:20.649 [2024-11-06 15:15:49.721388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2090010 (9): Bad file descriptor 00:17:20.649 [2024-11-06 15:15:49.721502] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.649 [2024-11-06 15:15:49.721557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.649 [2024-11-06 15:15:49.721600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.649 [2024-11-06 15:15:49.721617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2090010 with addr=10.0.0.2, port=4420 00:17:20.649 [2024-11-06 15:15:49.721628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2090010 is same with the state(5) to be set 00:17:20.649 [2024-11-06 15:15:49.721648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2090010 (9): Bad file descriptor 00:17:20.649 [2024-11-06 15:15:49.721684] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:20.649 [2024-11-06 15:15:49.721696] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:20.649 [2024-11-06 15:15:49.721727] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:20.649 [2024-11-06 15:15:49.721751] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.649 [2024-11-06 15:15:49.721764] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:20.649 15:15:49 -- host/timeout.sh@101 -- # sleep 3 00:17:21.584 [2024-11-06 15:15:50.721898] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:21.584 [2024-11-06 15:15:50.722253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:21.584 [2024-11-06 15:15:50.722443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:21.584 [2024-11-06 15:15:50.722507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2090010 with addr=10.0.0.2, port=4420 00:17:21.584 [2024-11-06 15:15:50.722744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2090010 is same with the state(5) to be set 00:17:21.584 [2024-11-06 15:15:50.722828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2090010 (9): Bad file descriptor 00:17:21.584 [2024-11-06 15:15:50.722991] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:21.584 [2024-11-06 15:15:50.723046] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:21.584 [2024-11-06 15:15:50.723098] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:21.584 [2024-11-06 15:15:50.723225] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:21.584 [2024-11-06 15:15:50.723269] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:22.519 [2024-11-06 15:15:51.723473] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:22.519 [2024-11-06 15:15:51.723852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:22.519 [2024-11-06 15:15:51.724037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:22.519 [2024-11-06 15:15:51.724097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2090010 with addr=10.0.0.2, port=4420 00:17:22.519 [2024-11-06 15:15:51.724365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2090010 is same with the state(5) to be set 00:17:22.519 [2024-11-06 15:15:51.724575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2090010 (9): Bad file descriptor 00:17:22.519 [2024-11-06 15:15:51.724769] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:22.519 [2024-11-06 15:15:51.724915] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:22.519 [2024-11-06 15:15:51.725074] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:22.519 [2024-11-06 15:15:51.725305] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:22.519 [2024-11-06 15:15:51.725442] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:23.455 [2024-11-06 15:15:52.726023] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:23.455 [2024-11-06 15:15:52.726385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:23.455 [2024-11-06 15:15:52.726485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:23.455 [2024-11-06 15:15:52.726598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2090010 with addr=10.0.0.2, port=4420 00:17:23.455 [2024-11-06 15:15:52.726741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2090010 is same with the state(5) to be set 00:17:23.455 [2024-11-06 15:15:52.726951] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2090010 (9): Bad file descriptor 00:17:23.455 [2024-11-06 15:15:52.727102] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:23.455 [2024-11-06 15:15:52.727115] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:23.455 [2024-11-06 15:15:52.727126] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:23.455 [2024-11-06 15:15:52.729856] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:23.455 [2024-11-06 15:15:52.729905] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:23.714 15:15:52 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:23.973 [2024-11-06 15:15:53.040817] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.973 15:15:53 -- host/timeout.sh@103 -- # wait 73933 00:17:24.540 [2024-11-06 15:15:53.753579] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
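The burst of connect() failures above (errno 111, i.e. connection refused) covers the window in which the target's TCP listener had been taken offline; once host/timeout.sh@102 re-adds it with nvmf_subsystem_add_listener, the pending controller reset completes ("Resetting controller successful"). A minimal bash sketch of that outage window, built only from the rpc.py calls and arguments visible in this trace — the surrounding test script itself is not reproduced here, so treat this as illustrative rather than the literal test code:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Take the TCP listener down; host-side connect() then fails with errno 111.
"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

# Leave the host reconnecting for a few seconds (the trace uses a 3-second sleep).
sleep 3

# Restore the listener; the next controller reset is then expected to succeed.
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420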
00:17:29.810 00:17:29.810 Latency(us) 00:17:29.810 [2024-11-06T15:15:59.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.810 [2024-11-06T15:15:59.085Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:29.810 Verification LBA range: start 0x0 length 0x4000 00:17:29.810 NVMe0n1 : 10.01 8436.92 32.96 6125.71 0.00 8775.74 402.15 3019898.88 00:17:29.810 [2024-11-06T15:15:59.085Z] =================================================================================================================== 00:17:29.810 [2024-11-06T15:15:59.085Z] Total : 8436.92 32.96 6125.71 0.00 8775.74 0.00 3019898.88 00:17:29.810 0 00:17:29.810 15:15:58 -- host/timeout.sh@105 -- # killprocess 73805 00:17:29.810 15:15:58 -- common/autotest_common.sh@936 -- # '[' -z 73805 ']' 00:17:29.810 15:15:58 -- common/autotest_common.sh@940 -- # kill -0 73805 00:17:29.810 15:15:58 -- common/autotest_common.sh@941 -- # uname 00:17:29.810 15:15:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:29.810 15:15:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73805 00:17:29.810 killing process with pid 73805 00:17:29.810 Received shutdown signal, test time was about 10.000000 seconds 00:17:29.810 00:17:29.810 Latency(us) 00:17:29.810 [2024-11-06T15:15:59.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.810 [2024-11-06T15:15:59.085Z] =================================================================================================================== 00:17:29.810 [2024-11-06T15:15:59.085Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:29.810 15:15:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:29.810 15:15:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:29.810 15:15:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73805' 00:17:29.810 15:15:58 -- common/autotest_common.sh@955 -- # kill 73805 00:17:29.810 15:15:58 -- common/autotest_common.sh@960 -- # wait 73805 00:17:29.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:29.810 15:15:58 -- host/timeout.sh@110 -- # bdevperf_pid=74043 00:17:29.810 15:15:58 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:17:29.810 15:15:58 -- host/timeout.sh@112 -- # waitforlisten 74043 /var/tmp/bdevperf.sock 00:17:29.810 15:15:58 -- common/autotest_common.sh@829 -- # '[' -z 74043 ']' 00:17:29.810 15:15:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:29.810 15:15:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:29.810 15:15:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:29.810 15:15:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:29.810 15:15:58 -- common/autotest_common.sh@10 -- # set +x 00:17:29.810 [2024-11-06 15:15:58.870872] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:29.810 [2024-11-06 15:15:58.871153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74043 ] 00:17:29.810 [2024-11-06 15:15:59.006526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.810 [2024-11-06 15:15:59.062620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.746 15:15:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:30.746 15:15:59 -- common/autotest_common.sh@862 -- # return 0 00:17:30.746 15:15:59 -- host/timeout.sh@116 -- # dtrace_pid=74059 00:17:30.746 15:15:59 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 74043 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:17:30.746 15:15:59 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:17:31.005 15:16:00 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:17:31.263 NVMe0n1 00:17:31.263 15:16:00 -- host/timeout.sh@124 -- # rpc_pid=74106 00:17:31.263 15:16:00 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:31.263 15:16:00 -- host/timeout.sh@125 -- # sleep 1 00:17:31.522 Running I/O for 10 seconds... 00:17:32.465 15:16:01 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:32.726 [2024-11-06 15:16:01.792104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:33248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:108168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:67936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:56896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:115520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 
15:16:01.792551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:57632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:29048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.726 [2024-11-06 15:16:01.792868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.726 [2024-11-06 15:16:01.792878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.792889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:123632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.792899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.792910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.792919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.792930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.792940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.792952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.792961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.792973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.792982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.792993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:35 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:67680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:100240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:47488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94976 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:91872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:32.727 [2024-11-06 15:16:01.793481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:104912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:57224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:35568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:36704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793689] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.727 [2024-11-06 15:16:01.793743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.727 [2024-11-06 15:16:01.793754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.793764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.793775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.793784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.793796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.793805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.793816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:55240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.793826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.793837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.793847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.793858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:68600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.793867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.793879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.793888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.793900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:28256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.793910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.793921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.793931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.793942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.793951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.793962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.793972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.793983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:117000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.793993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:26320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:35584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:116672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:68240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:32800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:115472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.728 [2024-11-06 15:16:01.794500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.728 [2024-11-06 15:16:01.794517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.794539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.794560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.794580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:36680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:32.729 [2024-11-06 15:16:01.794601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:67496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.794622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.794643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.794673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:43248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.794694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.794716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:40488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.794737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.794758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.794779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:118792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.794799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 
15:16:01.794819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.794841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:27664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.794865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.794886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.794907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.794928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.794948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.794969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.794989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:119776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.794999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.795010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:117984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.795019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.795030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.729 [2024-11-06 15:16:01.795040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.795051] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6370c0 is same with the state(5) to be set 00:17:32.729 [2024-11-06 15:16:01.795064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.729 [2024-11-06 15:16:01.795072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.729 [2024-11-06 15:16:01.795081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32560 len:8 PRP1 0x0 PRP2 0x0 00:17:32.729 [2024-11-06 15:16:01.795090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.729 [2024-11-06 15:16:01.795151] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6370c0 was disconnected and freed. reset controller. 00:17:32.729 [2024-11-06 15:16:01.795444] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:32.729 [2024-11-06 15:16:01.795526] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d4010 (9): Bad file descriptor 00:17:32.729 [2024-11-06 15:16:01.795635] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:32.729 [2024-11-06 15:16:01.795748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:32.729 [2024-11-06 15:16:01.795794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:32.729 [2024-11-06 15:16:01.795811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4010 with addr=10.0.0.2, port=4420 00:17:32.729 [2024-11-06 15:16:01.795822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d4010 is same with the state(5) to be set 00:17:32.729 [2024-11-06 15:16:01.795845] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d4010 (9): Bad file descriptor 00:17:32.729 [2024-11-06 15:16:01.795862] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:32.729 [2024-11-06 15:16:01.795872] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:32.729 [2024-11-06 15:16:01.795884] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:32.729 [2024-11-06 15:16:01.795905] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:32.729 [2024-11-06 15:16:01.795916] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:32.729 15:16:01 -- host/timeout.sh@128 -- # wait 74106 00:17:34.633 [2024-11-06 15:16:03.796054] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:34.633 [2024-11-06 15:16:03.796155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:34.633 [2024-11-06 15:16:03.796199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:34.633 [2024-11-06 15:16:03.796216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4010 with addr=10.0.0.2, port=4420 00:17:34.633 [2024-11-06 15:16:03.796228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d4010 is same with the state(5) to be set 00:17:34.633 [2024-11-06 15:16:03.796254] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d4010 (9): Bad file descriptor 00:17:34.633 [2024-11-06 15:16:03.796273] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:34.633 [2024-11-06 15:16:03.796283] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:34.633 [2024-11-06 15:16:03.796293] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:34.633 [2024-11-06 15:16:03.796319] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:34.633 [2024-11-06 15:16:03.796330] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:36.538 [2024-11-06 15:16:05.796504] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:36.538 [2024-11-06 15:16:05.796880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:36.538 [2024-11-06 15:16:05.796939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:36.538 [2024-11-06 15:16:05.796957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4010 with addr=10.0.0.2, port=4420 00:17:36.538 [2024-11-06 15:16:05.796971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d4010 is same with the state(5) to be set 00:17:36.538 [2024-11-06 15:16:05.797006] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d4010 (9): Bad file descriptor 00:17:36.538 [2024-11-06 15:16:05.797026] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:36.538 [2024-11-06 15:16:05.797036] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:36.538 [2024-11-06 15:16:05.797047] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:36.538 [2024-11-06 15:16:05.797075] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:36.538 [2024-11-06 15:16:05.797101] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:39.072 [2024-11-06 15:16:07.797187] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:39.072 [2024-11-06 15:16:07.797245] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:39.072 [2024-11-06 15:16:07.797259] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:39.072 [2024-11-06 15:16:07.797269] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:17:39.072 [2024-11-06 15:16:07.797296] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:39.640 00:17:39.640 Latency(us) 00:17:39.640 [2024-11-06T15:16:08.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.640 [2024-11-06T15:16:08.915Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:17:39.640 NVMe0n1 : 8.15 2213.63 8.65 15.71 0.00 57368.85 7328.12 7015926.69 00:17:39.640 [2024-11-06T15:16:08.915Z] =================================================================================================================== 00:17:39.640 [2024-11-06T15:16:08.915Z] Total : 2213.63 8.65 15.71 0.00 57368.85 7328.12 7015926.69 00:17:39.640 0 00:17:39.640 15:16:08 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:39.640 Attaching 5 probes... 00:17:39.640 1321.922967: reset bdev controller NVMe0 00:17:39.640 1322.055829: reconnect bdev controller NVMe0 00:17:39.640 3322.422391: reconnect delay bdev controller NVMe0 00:17:39.640 3322.440609: reconnect bdev controller NVMe0 00:17:39.640 5322.844473: reconnect delay bdev controller NVMe0 00:17:39.640 5322.866496: reconnect bdev controller NVMe0 00:17:39.640 7323.645580: reconnect delay bdev controller NVMe0 00:17:39.640 7323.666560: reconnect bdev controller NVMe0 00:17:39.640 15:16:08 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:17:39.640 15:16:08 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:17:39.640 15:16:08 -- host/timeout.sh@136 -- # kill 74059 00:17:39.640 15:16:08 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:39.640 15:16:08 -- host/timeout.sh@139 -- # killprocess 74043 00:17:39.640 15:16:08 -- common/autotest_common.sh@936 -- # '[' -z 74043 ']' 00:17:39.641 15:16:08 -- common/autotest_common.sh@940 -- # kill -0 74043 00:17:39.641 15:16:08 -- common/autotest_common.sh@941 -- # uname 00:17:39.641 15:16:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:39.641 15:16:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74043 00:17:39.641 killing process with pid 74043 00:17:39.641 Received shutdown signal, test time was about 8.216334 seconds 00:17:39.641 00:17:39.641 Latency(us) 00:17:39.641 [2024-11-06T15:16:08.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.641 [2024-11-06T15:16:08.916Z] =================================================================================================================== 00:17:39.641 [2024-11-06T15:16:08.916Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:39.641 15:16:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:39.641 15:16:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:39.641 15:16:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74043' 00:17:39.641 15:16:08 -- common/autotest_common.sh@955 -- # kill 74043 00:17:39.641 15:16:08 -- common/autotest_common.sh@960 -- # wait 74043 00:17:39.899 15:16:09 
-- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:40.158 15:16:09 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:17:40.158 15:16:09 -- host/timeout.sh@145 -- # nvmftestfini 00:17:40.158 15:16:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:40.158 15:16:09 -- nvmf/common.sh@116 -- # sync 00:17:40.158 15:16:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:40.158 15:16:09 -- nvmf/common.sh@119 -- # set +e 00:17:40.158 15:16:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:40.158 15:16:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:40.158 rmmod nvme_tcp 00:17:40.158 rmmod nvme_fabrics 00:17:40.158 rmmod nvme_keyring 00:17:40.158 15:16:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:40.158 15:16:09 -- nvmf/common.sh@123 -- # set -e 00:17:40.158 15:16:09 -- nvmf/common.sh@124 -- # return 0 00:17:40.158 15:16:09 -- nvmf/common.sh@477 -- # '[' -n 73605 ']' 00:17:40.158 15:16:09 -- nvmf/common.sh@478 -- # killprocess 73605 00:17:40.158 15:16:09 -- common/autotest_common.sh@936 -- # '[' -z 73605 ']' 00:17:40.158 15:16:09 -- common/autotest_common.sh@940 -- # kill -0 73605 00:17:40.158 15:16:09 -- common/autotest_common.sh@941 -- # uname 00:17:40.158 15:16:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:40.158 15:16:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73605 00:17:40.417 killing process with pid 73605 00:17:40.417 15:16:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:40.417 15:16:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:40.417 15:16:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73605' 00:17:40.417 15:16:09 -- common/autotest_common.sh@955 -- # kill 73605 00:17:40.417 15:16:09 -- common/autotest_common.sh@960 -- # wait 73605 00:17:40.417 15:16:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:40.417 15:16:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:40.417 15:16:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:40.417 15:16:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:40.417 15:16:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:40.417 15:16:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.417 15:16:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.417 15:16:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.417 15:16:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:40.417 ************************************ 00:17:40.417 END TEST nvmf_timeout 00:17:40.417 ************************************ 00:17:40.417 00:17:40.417 real 0m47.099s 00:17:40.417 user 2m18.955s 00:17:40.417 sys 0m5.225s 00:17:40.417 15:16:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:40.417 15:16:09 -- common/autotest_common.sh@10 -- # set +x 00:17:40.676 15:16:09 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:17:40.676 15:16:09 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:17:40.676 15:16:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:40.676 15:16:09 -- common/autotest_common.sh@10 -- # set +x 00:17:40.676 15:16:09 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:17:40.676 00:17:40.676 real 10m36.802s 00:17:40.676 user 29m41.601s 00:17:40.676 sys 3m21.858s 00:17:40.676 ************************************ 00:17:40.676 END TEST nvmf_tcp 00:17:40.676 ************************************ 00:17:40.676 
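The nvmf_timeout run above gates on how many "reconnect delay bdev controller NVMe0" probes landed in trace.txt before it kills bdevperf and tears the target down. A minimal bash sketch of that gate, assuming placeholder names for the trace file and bdevperf pid; the grep pattern, the threshold of 2 and the cleanup steps are taken from the trace itself, while the surrounding control flow is an assumption:

  # sketch only: $trace_file and $bdevperf_pid stand in for the path and pid seen above
  delay_count=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace_file")
  if (( delay_count <= 2 )); then   # the run above recorded 3 delays, so this gate passes
      echo "too few reconnect delays: $delay_count" >&2
      exit 1
  fi
  kill "$bdevperf_pid"              # pid 74059 in the run above
  rm -f "$trace_file"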
15:16:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:40.676 15:16:09 -- common/autotest_common.sh@10 -- # set +x 00:17:40.676 15:16:09 -- spdk/autotest.sh@283 -- # [[ 1 -eq 0 ]] 00:17:40.676 15:16:09 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:17:40.676 15:16:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:40.676 15:16:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:40.676 15:16:09 -- common/autotest_common.sh@10 -- # set +x 00:17:40.676 ************************************ 00:17:40.676 START TEST nvmf_dif 00:17:40.676 ************************************ 00:17:40.676 15:16:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:17:40.676 * Looking for test storage... 00:17:40.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:40.676 15:16:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:40.676 15:16:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:40.676 15:16:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:40.935 15:16:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:40.935 15:16:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:40.935 15:16:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:40.935 15:16:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:40.935 15:16:09 -- scripts/common.sh@335 -- # IFS=.-: 00:17:40.935 15:16:09 -- scripts/common.sh@335 -- # read -ra ver1 00:17:40.935 15:16:09 -- scripts/common.sh@336 -- # IFS=.-: 00:17:40.935 15:16:09 -- scripts/common.sh@336 -- # read -ra ver2 00:17:40.935 15:16:09 -- scripts/common.sh@337 -- # local 'op=<' 00:17:40.935 15:16:09 -- scripts/common.sh@339 -- # ver1_l=2 00:17:40.935 15:16:09 -- scripts/common.sh@340 -- # ver2_l=1 00:17:40.935 15:16:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:40.935 15:16:09 -- scripts/common.sh@343 -- # case "$op" in 00:17:40.935 15:16:09 -- scripts/common.sh@344 -- # : 1 00:17:40.935 15:16:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:40.935 15:16:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:40.935 15:16:09 -- scripts/common.sh@364 -- # decimal 1 00:17:40.935 15:16:09 -- scripts/common.sh@352 -- # local d=1 00:17:40.935 15:16:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:40.935 15:16:09 -- scripts/common.sh@354 -- # echo 1 00:17:40.935 15:16:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:40.935 15:16:10 -- scripts/common.sh@365 -- # decimal 2 00:17:40.935 15:16:10 -- scripts/common.sh@352 -- # local d=2 00:17:40.935 15:16:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:40.935 15:16:10 -- scripts/common.sh@354 -- # echo 2 00:17:40.935 15:16:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:40.935 15:16:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:40.935 15:16:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:40.935 15:16:10 -- scripts/common.sh@367 -- # return 0 00:17:40.935 15:16:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:40.935 15:16:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:40.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.935 --rc genhtml_branch_coverage=1 00:17:40.935 --rc genhtml_function_coverage=1 00:17:40.935 --rc genhtml_legend=1 00:17:40.935 --rc geninfo_all_blocks=1 00:17:40.935 --rc geninfo_unexecuted_blocks=1 00:17:40.935 00:17:40.935 ' 00:17:40.935 15:16:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:40.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.935 --rc genhtml_branch_coverage=1 00:17:40.935 --rc genhtml_function_coverage=1 00:17:40.935 --rc genhtml_legend=1 00:17:40.935 --rc geninfo_all_blocks=1 00:17:40.935 --rc geninfo_unexecuted_blocks=1 00:17:40.935 00:17:40.935 ' 00:17:40.935 15:16:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:40.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.935 --rc genhtml_branch_coverage=1 00:17:40.935 --rc genhtml_function_coverage=1 00:17:40.935 --rc genhtml_legend=1 00:17:40.935 --rc geninfo_all_blocks=1 00:17:40.935 --rc geninfo_unexecuted_blocks=1 00:17:40.935 00:17:40.935 ' 00:17:40.935 15:16:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:40.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.935 --rc genhtml_branch_coverage=1 00:17:40.936 --rc genhtml_function_coverage=1 00:17:40.936 --rc genhtml_legend=1 00:17:40.936 --rc geninfo_all_blocks=1 00:17:40.936 --rc geninfo_unexecuted_blocks=1 00:17:40.936 00:17:40.936 ' 00:17:40.936 15:16:10 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:40.936 15:16:10 -- nvmf/common.sh@7 -- # uname -s 00:17:40.936 15:16:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.936 15:16:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.936 15:16:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.936 15:16:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.936 15:16:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.936 15:16:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.936 15:16:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.936 15:16:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.936 15:16:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.936 15:16:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.936 15:16:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:17:40.936 
15:16:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:17:40.936 15:16:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.936 15:16:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.936 15:16:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:40.936 15:16:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:40.936 15:16:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.936 15:16:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.936 15:16:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.936 15:16:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.936 15:16:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.936 15:16:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.936 15:16:10 -- paths/export.sh@5 -- # export PATH 00:17:40.936 15:16:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.936 15:16:10 -- nvmf/common.sh@46 -- # : 0 00:17:40.936 15:16:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:40.936 15:16:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:40.936 15:16:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:40.936 15:16:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.936 15:16:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.936 15:16:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:40.936 15:16:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:40.936 15:16:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:40.936 15:16:10 -- target/dif.sh@15 -- # NULL_META=16 00:17:40.936 15:16:10 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:17:40.936 15:16:10 -- target/dif.sh@15 -- # NULL_SIZE=64 00:17:40.936 15:16:10 -- target/dif.sh@15 -- # NULL_DIF=1 00:17:40.936 15:16:10 -- target/dif.sh@135 -- # nvmftestinit 00:17:40.936 15:16:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:40.936 15:16:10 
-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.936 15:16:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:40.936 15:16:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:40.936 15:16:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:40.936 15:16:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.936 15:16:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:17:40.936 15:16:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.936 15:16:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:40.936 15:16:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:40.936 15:16:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:40.936 15:16:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:40.936 15:16:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:40.936 15:16:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:40.936 15:16:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.936 15:16:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.936 15:16:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:40.936 15:16:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:40.936 15:16:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:40.936 15:16:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:40.936 15:16:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:40.936 15:16:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.936 15:16:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:40.936 15:16:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:40.936 15:16:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:40.936 15:16:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:40.936 15:16:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:40.936 15:16:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:40.936 Cannot find device "nvmf_tgt_br" 00:17:40.936 15:16:10 -- nvmf/common.sh@154 -- # true 00:17:40.936 15:16:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:40.936 Cannot find device "nvmf_tgt_br2" 00:17:40.936 15:16:10 -- nvmf/common.sh@155 -- # true 00:17:40.936 15:16:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:40.936 15:16:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:40.936 Cannot find device "nvmf_tgt_br" 00:17:40.936 15:16:10 -- nvmf/common.sh@157 -- # true 00:17:40.936 15:16:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:40.936 Cannot find device "nvmf_tgt_br2" 00:17:40.936 15:16:10 -- nvmf/common.sh@158 -- # true 00:17:40.936 15:16:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:40.936 15:16:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:40.936 15:16:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:40.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:40.936 15:16:10 -- nvmf/common.sh@161 -- # true 00:17:40.936 15:16:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:40.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:40.936 15:16:10 -- nvmf/common.sh@162 -- # true 00:17:40.936 15:16:10 -- nvmf/common.sh@165 -- # ip netns add 
nvmf_tgt_ns_spdk 00:17:40.936 15:16:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:40.936 15:16:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:41.195 15:16:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:41.195 15:16:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:41.195 15:16:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:41.195 15:16:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:41.195 15:16:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:41.195 15:16:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:41.195 15:16:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:41.195 15:16:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:41.195 15:16:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:41.195 15:16:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:41.195 15:16:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:41.195 15:16:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:41.195 15:16:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:41.195 15:16:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:41.195 15:16:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:41.195 15:16:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:41.195 15:16:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:41.195 15:16:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:41.195 15:16:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:41.195 15:16:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:41.195 15:16:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:41.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:41.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:17:41.195 00:17:41.195 --- 10.0.0.2 ping statistics --- 00:17:41.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.196 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:17:41.196 15:16:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:41.196 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:41.196 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:17:41.196 00:17:41.196 --- 10.0.0.3 ping statistics --- 00:17:41.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.196 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:41.196 15:16:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:41.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:41.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:41.196 00:17:41.196 --- 10.0.0.1 ping statistics --- 00:17:41.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.196 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:41.196 15:16:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.196 15:16:10 -- nvmf/common.sh@421 -- # return 0 00:17:41.196 15:16:10 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:17:41.196 15:16:10 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:41.454 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:41.454 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:41.759 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:41.759 15:16:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.759 15:16:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:41.759 15:16:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:41.759 15:16:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.759 15:16:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:41.759 15:16:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:41.759 15:16:10 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:17:41.759 15:16:10 -- target/dif.sh@137 -- # nvmfappstart 00:17:41.759 15:16:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:41.759 15:16:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:41.759 15:16:10 -- common/autotest_common.sh@10 -- # set +x 00:17:41.759 15:16:10 -- nvmf/common.sh@469 -- # nvmfpid=74551 00:17:41.759 15:16:10 -- nvmf/common.sh@470 -- # waitforlisten 74551 00:17:41.759 15:16:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:41.759 15:16:10 -- common/autotest_common.sh@829 -- # '[' -z 74551 ']' 00:17:41.759 15:16:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.759 15:16:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.759 15:16:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.759 15:16:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.759 15:16:10 -- common/autotest_common.sh@10 -- # set +x 00:17:41.759 [2024-11-06 15:16:10.853280] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:41.759 [2024-11-06 15:16:10.853391] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.759 [2024-11-06 15:16:10.995838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.042 [2024-11-06 15:16:11.065065] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:42.042 [2024-11-06 15:16:11.065244] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.042 [2024-11-06 15:16:11.065259] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
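Before the dif tests start, nvmftestinit rebuilds the virtual test network traced above: a network namespace for the target, veth pairs bridged back to the initiator, and ICMP checks against 10.0.0.1/2/3. A condensed sketch using only commands and addresses that appear in the trace; link-up, nomaster and error-suppression steps are omitted for brevity:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator-to-target reachability, as verified above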
00:17:42.042 [2024-11-06 15:16:11.065270] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.042 [2024-11-06 15:16:11.065306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.978 15:16:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:42.978 15:16:11 -- common/autotest_common.sh@862 -- # return 0 00:17:42.978 15:16:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:42.978 15:16:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:42.978 15:16:11 -- common/autotest_common.sh@10 -- # set +x 00:17:42.978 15:16:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.978 15:16:11 -- target/dif.sh@139 -- # create_transport 00:17:42.978 15:16:11 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:17:42.978 15:16:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.978 15:16:11 -- common/autotest_common.sh@10 -- # set +x 00:17:42.978 [2024-11-06 15:16:11.933396] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.978 15:16:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.978 15:16:11 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:17:42.978 15:16:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:42.978 15:16:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:42.978 15:16:11 -- common/autotest_common.sh@10 -- # set +x 00:17:42.978 ************************************ 00:17:42.978 START TEST fio_dif_1_default 00:17:42.978 ************************************ 00:17:42.978 15:16:11 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:17:42.978 15:16:11 -- target/dif.sh@86 -- # create_subsystems 0 00:17:42.978 15:16:11 -- target/dif.sh@28 -- # local sub 00:17:42.978 15:16:11 -- target/dif.sh@30 -- # for sub in "$@" 00:17:42.978 15:16:11 -- target/dif.sh@31 -- # create_subsystem 0 00:17:42.978 15:16:11 -- target/dif.sh@18 -- # local sub_id=0 00:17:42.978 15:16:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:17:42.978 15:16:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.978 15:16:11 -- common/autotest_common.sh@10 -- # set +x 00:17:42.978 bdev_null0 00:17:42.978 15:16:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.978 15:16:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:17:42.978 15:16:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.978 15:16:11 -- common/autotest_common.sh@10 -- # set +x 00:17:42.978 15:16:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.978 15:16:11 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:17:42.978 15:16:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.978 15:16:11 -- common/autotest_common.sh@10 -- # set +x 00:17:42.978 15:16:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.978 15:16:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:42.978 15:16:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.978 15:16:11 -- common/autotest_common.sh@10 -- # set +x 00:17:42.978 [2024-11-06 15:16:11.977503] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.978 15:16:11 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.978 15:16:11 -- target/dif.sh@87 -- # fio /dev/fd/62 00:17:42.978 15:16:11 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:17:42.978 15:16:11 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:17:42.978 15:16:11 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:42.978 15:16:11 -- target/dif.sh@82 -- # gen_fio_conf 00:17:42.978 15:16:11 -- target/dif.sh@54 -- # local file 00:17:42.978 15:16:11 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:42.978 15:16:11 -- target/dif.sh@56 -- # cat 00:17:42.978 15:16:11 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:17:42.978 15:16:11 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:42.978 15:16:11 -- nvmf/common.sh@520 -- # config=() 00:17:42.978 15:16:11 -- common/autotest_common.sh@1328 -- # local sanitizers 00:17:42.978 15:16:11 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:42.978 15:16:11 -- common/autotest_common.sh@1330 -- # shift 00:17:42.978 15:16:11 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:17:42.978 15:16:11 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:42.978 15:16:11 -- nvmf/common.sh@520 -- # local subsystem config 00:17:42.978 15:16:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:42.978 15:16:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:42.978 { 00:17:42.978 "params": { 00:17:42.978 "name": "Nvme$subsystem", 00:17:42.978 "trtype": "$TEST_TRANSPORT", 00:17:42.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:42.978 "adrfam": "ipv4", 00:17:42.978 "trsvcid": "$NVMF_PORT", 00:17:42.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:42.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:42.978 "hdgst": ${hdgst:-false}, 00:17:42.978 "ddgst": ${ddgst:-false} 00:17:42.978 }, 00:17:42.978 "method": "bdev_nvme_attach_controller" 00:17:42.978 } 00:17:42.978 EOF 00:17:42.978 )") 00:17:42.978 15:16:11 -- target/dif.sh@72 -- # (( file = 1 )) 00:17:42.978 15:16:11 -- target/dif.sh@72 -- # (( file <= files )) 00:17:42.978 15:16:11 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:42.978 15:16:11 -- nvmf/common.sh@542 -- # cat 00:17:42.978 15:16:11 -- common/autotest_common.sh@1334 -- # grep libasan 00:17:42.978 15:16:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:42.978 15:16:11 -- nvmf/common.sh@544 -- # jq . 
00:17:42.978 15:16:11 -- nvmf/common.sh@545 -- # IFS=, 00:17:42.978 15:16:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:42.978 "params": { 00:17:42.978 "name": "Nvme0", 00:17:42.978 "trtype": "tcp", 00:17:42.978 "traddr": "10.0.0.2", 00:17:42.978 "adrfam": "ipv4", 00:17:42.978 "trsvcid": "4420", 00:17:42.978 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:42.978 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:42.978 "hdgst": false, 00:17:42.978 "ddgst": false 00:17:42.978 }, 00:17:42.978 "method": "bdev_nvme_attach_controller" 00:17:42.978 }' 00:17:42.978 15:16:12 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:42.978 15:16:12 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:42.978 15:16:12 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:42.978 15:16:12 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:42.978 15:16:12 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:42.978 15:16:12 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:17:42.978 15:16:12 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:42.978 15:16:12 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:42.978 15:16:12 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:42.978 15:16:12 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:42.978 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:17:42.978 fio-3.35 00:17:42.978 Starting 1 thread 00:17:43.546 [2024-11-06 15:16:12.557705] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
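fio_dif_1_default exports a metadata-capable null bdev over NVMe/TCP and drives it with fio's SPDK bdev plugin. A condensed sketch of the sequence the trace above performs; rpc_cmd is the suite's wrapper around scripts/rpc.py, and the two file descriptors carry the generated bdev JSON and the generated fio job file:

  rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # fio reaches the namespace through the SPDK bdev plugin rather than a kernel block device
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61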
00:17:43.546 [2024-11-06 15:16:12.557787] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:53.523 00:17:53.523 filename0: (groupid=0, jobs=1): err= 0: pid=74619: Wed Nov 6 15:16:22 2024 00:17:53.523 read: IOPS=9294, BW=36.3MiB/s (38.1MB/s)(363MiB/10001msec) 00:17:53.523 slat (usec): min=5, max=127, avg= 8.46, stdev= 3.96 00:17:53.523 clat (usec): min=320, max=4852, avg=405.00, stdev=54.61 00:17:53.523 lat (usec): min=326, max=4880, avg=413.46, stdev=55.48 00:17:53.523 clat percentiles (usec): 00:17:53.523 | 1.00th=[ 334], 5.00th=[ 347], 10.00th=[ 355], 20.00th=[ 367], 00:17:53.523 | 30.00th=[ 375], 40.00th=[ 388], 50.00th=[ 396], 60.00th=[ 408], 00:17:53.523 | 70.00th=[ 424], 80.00th=[ 441], 90.00th=[ 469], 95.00th=[ 490], 00:17:53.523 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 586], 99.95th=[ 627], 00:17:53.523 | 99.99th=[ 1123] 00:17:53.523 bw ( KiB/s): min=35264, max=38688, per=100.00%, avg=37217.68, stdev=772.10, samples=19 00:17:53.523 iops : min= 8816, max= 9672, avg=9304.42, stdev=193.02, samples=19 00:17:53.523 lat (usec) : 500=96.70%, 750=3.28%, 1000=0.01% 00:17:53.523 lat (msec) : 2=0.01%, 10=0.01% 00:17:53.523 cpu : usr=84.63%, sys=13.39%, ctx=28, majf=0, minf=9 00:17:53.523 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:53.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:53.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:53.523 issued rwts: total=92956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:53.523 latency : target=0, window=0, percentile=100.00%, depth=4 00:17:53.523 00:17:53.523 Run status group 0 (all jobs): 00:17:53.523 READ: bw=36.3MiB/s (38.1MB/s), 36.3MiB/s-36.3MiB/s (38.1MB/s-38.1MB/s), io=363MiB (381MB), run=10001-10001msec 00:17:53.782 15:16:22 -- target/dif.sh@88 -- # destroy_subsystems 0 00:17:53.782 15:16:22 -- target/dif.sh@43 -- # local sub 00:17:53.782 15:16:22 -- target/dif.sh@45 -- # for sub in "$@" 00:17:53.782 15:16:22 -- target/dif.sh@46 -- # destroy_subsystem 0 00:17:53.782 15:16:22 -- target/dif.sh@36 -- # local sub_id=0 00:17:53.782 15:16:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:53.782 15:16:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.782 15:16:22 -- common/autotest_common.sh@10 -- # set +x 00:17:53.782 15:16:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.782 15:16:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:17:53.782 15:16:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.782 15:16:22 -- common/autotest_common.sh@10 -- # set +x 00:17:53.782 15:16:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.782 00:17:53.782 real 0m10.936s 00:17:53.782 user 0m9.086s 00:17:53.782 sys 0m1.570s 00:17:53.782 15:16:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:53.782 15:16:22 -- common/autotest_common.sh@10 -- # set +x 00:17:53.782 ************************************ 00:17:53.782 END TEST fio_dif_1_default 00:17:53.782 ************************************ 00:17:53.782 15:16:22 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:17:53.782 15:16:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:53.782 15:16:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:53.782 15:16:22 -- common/autotest_common.sh@10 -- # set +x 00:17:53.782 ************************************ 00:17:53.782 START TEST 
fio_dif_1_multi_subsystems 00:17:53.782 ************************************ 00:17:53.782 15:16:22 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:17:53.782 15:16:22 -- target/dif.sh@92 -- # local files=1 00:17:53.782 15:16:22 -- target/dif.sh@94 -- # create_subsystems 0 1 00:17:53.782 15:16:22 -- target/dif.sh@28 -- # local sub 00:17:53.782 15:16:22 -- target/dif.sh@30 -- # for sub in "$@" 00:17:53.782 15:16:22 -- target/dif.sh@31 -- # create_subsystem 0 00:17:53.782 15:16:22 -- target/dif.sh@18 -- # local sub_id=0 00:17:53.782 15:16:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:17:53.782 15:16:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.782 15:16:22 -- common/autotest_common.sh@10 -- # set +x 00:17:53.782 bdev_null0 00:17:53.782 15:16:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.782 15:16:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:17:53.782 15:16:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.782 15:16:22 -- common/autotest_common.sh@10 -- # set +x 00:17:53.782 15:16:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.782 15:16:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:17:53.782 15:16:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.782 15:16:22 -- common/autotest_common.sh@10 -- # set +x 00:17:53.782 15:16:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.782 15:16:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:53.782 15:16:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.782 15:16:22 -- common/autotest_common.sh@10 -- # set +x 00:17:53.782 [2024-11-06 15:16:22.966800] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.782 15:16:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.782 15:16:22 -- target/dif.sh@30 -- # for sub in "$@" 00:17:53.782 15:16:22 -- target/dif.sh@31 -- # create_subsystem 1 00:17:53.782 15:16:22 -- target/dif.sh@18 -- # local sub_id=1 00:17:53.782 15:16:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:17:53.782 15:16:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.782 15:16:22 -- common/autotest_common.sh@10 -- # set +x 00:17:53.782 bdev_null1 00:17:53.782 15:16:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.782 15:16:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:17:53.782 15:16:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.782 15:16:22 -- common/autotest_common.sh@10 -- # set +x 00:17:53.782 15:16:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.782 15:16:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:17:53.782 15:16:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.782 15:16:22 -- common/autotest_common.sh@10 -- # set +x 00:17:53.782 15:16:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.783 15:16:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:53.783 15:16:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.783 15:16:22 -- 
common/autotest_common.sh@10 -- # set +x 00:17:53.783 15:16:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.783 15:16:23 -- target/dif.sh@95 -- # fio /dev/fd/62 00:17:53.783 15:16:23 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:17:53.783 15:16:23 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:17:53.783 15:16:23 -- nvmf/common.sh@520 -- # config=() 00:17:53.783 15:16:23 -- nvmf/common.sh@520 -- # local subsystem config 00:17:53.783 15:16:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:53.783 15:16:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:53.783 { 00:17:53.783 "params": { 00:17:53.783 "name": "Nvme$subsystem", 00:17:53.783 "trtype": "$TEST_TRANSPORT", 00:17:53.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:53.783 "adrfam": "ipv4", 00:17:53.783 "trsvcid": "$NVMF_PORT", 00:17:53.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:53.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:53.783 "hdgst": ${hdgst:-false}, 00:17:53.783 "ddgst": ${ddgst:-false} 00:17:53.783 }, 00:17:53.783 "method": "bdev_nvme_attach_controller" 00:17:53.783 } 00:17:53.783 EOF 00:17:53.783 )") 00:17:53.783 15:16:23 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:53.783 15:16:23 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:53.783 15:16:23 -- target/dif.sh@82 -- # gen_fio_conf 00:17:53.783 15:16:23 -- target/dif.sh@54 -- # local file 00:17:53.783 15:16:23 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:17:53.783 15:16:23 -- target/dif.sh@56 -- # cat 00:17:53.783 15:16:23 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:53.783 15:16:23 -- common/autotest_common.sh@1328 -- # local sanitizers 00:17:53.783 15:16:23 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:53.783 15:16:23 -- common/autotest_common.sh@1330 -- # shift 00:17:53.783 15:16:23 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:17:53.783 15:16:23 -- nvmf/common.sh@542 -- # cat 00:17:53.783 15:16:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:53.783 15:16:23 -- target/dif.sh@72 -- # (( file = 1 )) 00:17:53.783 15:16:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:53.783 15:16:23 -- target/dif.sh@72 -- # (( file <= files )) 00:17:53.783 15:16:23 -- target/dif.sh@73 -- # cat 00:17:53.783 15:16:23 -- common/autotest_common.sh@1334 -- # grep libasan 00:17:53.783 15:16:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:53.783 15:16:23 -- target/dif.sh@72 -- # (( file++ )) 00:17:53.783 15:16:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:53.783 15:16:23 -- target/dif.sh@72 -- # (( file <= files )) 00:17:53.783 15:16:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:53.783 { 00:17:53.783 "params": { 00:17:53.783 "name": "Nvme$subsystem", 00:17:53.783 "trtype": "$TEST_TRANSPORT", 00:17:53.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:53.783 "adrfam": "ipv4", 00:17:53.783 "trsvcid": "$NVMF_PORT", 00:17:53.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:53.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:53.783 "hdgst": ${hdgst:-false}, 00:17:53.783 "ddgst": ${ddgst:-false} 00:17:53.783 }, 00:17:53.783 "method": "bdev_nvme_attach_controller" 00:17:53.783 } 
00:17:53.783 EOF 00:17:53.783 )") 00:17:53.783 15:16:23 -- nvmf/common.sh@542 -- # cat 00:17:53.783 15:16:23 -- nvmf/common.sh@544 -- # jq . 00:17:53.783 15:16:23 -- nvmf/common.sh@545 -- # IFS=, 00:17:53.783 15:16:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:53.783 "params": { 00:17:53.783 "name": "Nvme0", 00:17:53.783 "trtype": "tcp", 00:17:53.783 "traddr": "10.0.0.2", 00:17:53.783 "adrfam": "ipv4", 00:17:53.783 "trsvcid": "4420", 00:17:53.783 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:53.783 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:53.783 "hdgst": false, 00:17:53.783 "ddgst": false 00:17:53.783 }, 00:17:53.783 "method": "bdev_nvme_attach_controller" 00:17:53.783 },{ 00:17:53.783 "params": { 00:17:53.783 "name": "Nvme1", 00:17:53.783 "trtype": "tcp", 00:17:53.783 "traddr": "10.0.0.2", 00:17:53.783 "adrfam": "ipv4", 00:17:53.783 "trsvcid": "4420", 00:17:53.783 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.783 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:53.783 "hdgst": false, 00:17:53.783 "ddgst": false 00:17:53.783 }, 00:17:53.783 "method": "bdev_nvme_attach_controller" 00:17:53.783 }' 00:17:53.783 15:16:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:53.783 15:16:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:53.783 15:16:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:53.783 15:16:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:53.783 15:16:23 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:17:53.783 15:16:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:54.040 15:16:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:54.040 15:16:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:54.040 15:16:23 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:54.040 15:16:23 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:54.040 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:17:54.040 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:17:54.040 fio-3.35 00:17:54.040 Starting 2 threads 00:17:54.606 [2024-11-06 15:16:23.598732] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:17:54.606 [2024-11-06 15:16:23.598798] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:04.606 00:18:04.606 filename0: (groupid=0, jobs=1): err= 0: pid=74779: Wed Nov 6 15:16:33 2024 00:18:04.606 read: IOPS=5033, BW=19.7MiB/s (20.6MB/s)(197MiB/10001msec) 00:18:04.606 slat (nsec): min=5330, max=69762, avg=13325.03, stdev=5088.18 00:18:04.606 clat (usec): min=566, max=5314, avg=758.37, stdev=83.65 00:18:04.606 lat (usec): min=575, max=5338, avg=771.70, stdev=84.50 00:18:04.606 clat percentiles (usec): 00:18:04.606 | 1.00th=[ 627], 5.00th=[ 660], 10.00th=[ 676], 20.00th=[ 701], 00:18:04.606 | 30.00th=[ 717], 40.00th=[ 734], 50.00th=[ 750], 60.00th=[ 766], 00:18:04.606 | 70.00th=[ 791], 80.00th=[ 824], 90.00th=[ 857], 95.00th=[ 881], 00:18:04.606 | 99.00th=[ 930], 99.50th=[ 955], 99.90th=[ 996], 99.95th=[ 1012], 00:18:04.606 | 99.99th=[ 3163] 00:18:04.606 bw ( KiB/s): min=19552, max=21089, per=50.03%, avg=20152.53, stdev=433.37, samples=19 00:18:04.606 iops : min= 4888, max= 5272, avg=5038.11, stdev=108.29, samples=19 00:18:04.606 lat (usec) : 750=50.52%, 1000=49.38% 00:18:04.606 lat (msec) : 2=0.08%, 4=0.01%, 10=0.01% 00:18:04.606 cpu : usr=90.28%, sys=8.24%, ctx=76, majf=0, minf=0 00:18:04.606 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:04.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.606 issued rwts: total=50340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.606 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:04.606 filename1: (groupid=0, jobs=1): err= 0: pid=74780: Wed Nov 6 15:16:33 2024 00:18:04.606 read: IOPS=5036, BW=19.7MiB/s (20.6MB/s)(197MiB/10001msec) 00:18:04.606 slat (nsec): min=6322, max=73153, avg=13318.29, stdev=5076.73 00:18:04.606 clat (usec): min=380, max=4049, avg=757.17, stdev=72.73 00:18:04.606 lat (usec): min=387, max=4073, avg=770.49, stdev=73.53 00:18:04.606 clat percentiles (usec): 00:18:04.606 | 1.00th=[ 644], 5.00th=[ 668], 10.00th=[ 676], 20.00th=[ 701], 00:18:04.606 | 30.00th=[ 717], 40.00th=[ 734], 50.00th=[ 750], 60.00th=[ 766], 00:18:04.606 | 70.00th=[ 791], 80.00th=[ 816], 90.00th=[ 848], 95.00th=[ 873], 00:18:04.606 | 99.00th=[ 922], 99.50th=[ 938], 99.90th=[ 996], 99.95th=[ 1012], 00:18:04.606 | 99.99th=[ 1074] 00:18:04.606 bw ( KiB/s): min=19712, max=21089, per=50.06%, avg=20163.00, stdev=419.85, samples=19 00:18:04.606 iops : min= 4928, max= 5272, avg=5040.68, stdev=104.95, samples=19 00:18:04.606 lat (usec) : 500=0.04%, 750=51.39%, 1000=48.49% 00:18:04.606 lat (msec) : 2=0.07%, 10=0.01% 00:18:04.606 cpu : usr=90.07%, sys=8.55%, ctx=20, majf=0, minf=0 00:18:04.606 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:04.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.606 issued rwts: total=50368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.606 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:04.606 00:18:04.606 Run status group 0 (all jobs): 00:18:04.606 READ: bw=39.3MiB/s (41.2MB/s), 19.7MiB/s-19.7MiB/s (20.6MB/s-20.6MB/s), io=393MiB (412MB), run=10001-10001msec 00:18:04.865 15:16:33 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:18:04.865 15:16:33 -- target/dif.sh@43 -- # local sub 00:18:04.865 15:16:33 -- target/dif.sh@45 -- # for sub in "$@" 00:18:04.865 
15:16:33 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:04.865 15:16:33 -- target/dif.sh@36 -- # local sub_id=0 00:18:04.865 15:16:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:04.865 15:16:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.865 15:16:33 -- common/autotest_common.sh@10 -- # set +x 00:18:04.865 15:16:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.865 15:16:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:04.865 15:16:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.865 15:16:33 -- common/autotest_common.sh@10 -- # set +x 00:18:04.865 15:16:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.865 15:16:33 -- target/dif.sh@45 -- # for sub in "$@" 00:18:04.865 15:16:33 -- target/dif.sh@46 -- # destroy_subsystem 1 00:18:04.865 15:16:33 -- target/dif.sh@36 -- # local sub_id=1 00:18:04.865 15:16:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:04.865 15:16:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.865 15:16:33 -- common/autotest_common.sh@10 -- # set +x 00:18:04.865 15:16:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.865 15:16:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:18:04.865 15:16:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.865 15:16:33 -- common/autotest_common.sh@10 -- # set +x 00:18:04.865 15:16:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.865 00:18:04.865 real 0m10.993s 00:18:04.865 user 0m18.712s 00:18:04.865 sys 0m1.906s 00:18:04.865 15:16:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:04.865 ************************************ 00:18:04.865 15:16:33 -- common/autotest_common.sh@10 -- # set +x 00:18:04.865 END TEST fio_dif_1_multi_subsystems 00:18:04.865 ************************************ 00:18:04.865 15:16:33 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:18:04.865 15:16:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:04.865 15:16:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:04.865 15:16:33 -- common/autotest_common.sh@10 -- # set +x 00:18:04.865 ************************************ 00:18:04.865 START TEST fio_dif_rand_params 00:18:04.865 ************************************ 00:18:04.865 15:16:33 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:18:04.865 15:16:33 -- target/dif.sh@100 -- # local NULL_DIF 00:18:04.865 15:16:33 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:18:04.865 15:16:33 -- target/dif.sh@103 -- # NULL_DIF=3 00:18:04.865 15:16:33 -- target/dif.sh@103 -- # bs=128k 00:18:04.865 15:16:33 -- target/dif.sh@103 -- # numjobs=3 00:18:04.865 15:16:33 -- target/dif.sh@103 -- # iodepth=3 00:18:04.865 15:16:33 -- target/dif.sh@103 -- # runtime=5 00:18:04.865 15:16:33 -- target/dif.sh@105 -- # create_subsystems 0 00:18:04.865 15:16:33 -- target/dif.sh@28 -- # local sub 00:18:04.865 15:16:33 -- target/dif.sh@30 -- # for sub in "$@" 00:18:04.865 15:16:33 -- target/dif.sh@31 -- # create_subsystem 0 00:18:04.865 15:16:33 -- target/dif.sh@18 -- # local sub_id=0 00:18:04.865 15:16:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:18:04.865 15:16:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.865 15:16:33 -- common/autotest_common.sh@10 -- # set +x 00:18:04.865 bdev_null0 00:18:04.865 15:16:33 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.865 15:16:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:04.865 15:16:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.865 15:16:33 -- common/autotest_common.sh@10 -- # set +x 00:18:04.865 15:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.865 15:16:34 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:04.865 15:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.865 15:16:34 -- common/autotest_common.sh@10 -- # set +x 00:18:04.865 15:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.865 15:16:34 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:04.865 15:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.865 15:16:34 -- common/autotest_common.sh@10 -- # set +x 00:18:04.865 [2024-11-06 15:16:34.018353] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.865 15:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.865 15:16:34 -- target/dif.sh@106 -- # fio /dev/fd/62 00:18:04.865 15:16:34 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:18:04.865 15:16:34 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:04.865 15:16:34 -- nvmf/common.sh@520 -- # config=() 00:18:04.865 15:16:34 -- nvmf/common.sh@520 -- # local subsystem config 00:18:04.865 15:16:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:04.865 15:16:34 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:04.865 15:16:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:04.865 { 00:18:04.865 "params": { 00:18:04.865 "name": "Nvme$subsystem", 00:18:04.865 "trtype": "$TEST_TRANSPORT", 00:18:04.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:04.865 "adrfam": "ipv4", 00:18:04.865 "trsvcid": "$NVMF_PORT", 00:18:04.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:04.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:04.866 "hdgst": ${hdgst:-false}, 00:18:04.866 "ddgst": ${ddgst:-false} 00:18:04.866 }, 00:18:04.866 "method": "bdev_nvme_attach_controller" 00:18:04.866 } 00:18:04.866 EOF 00:18:04.866 )") 00:18:04.866 15:16:34 -- target/dif.sh@82 -- # gen_fio_conf 00:18:04.866 15:16:34 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:04.866 15:16:34 -- target/dif.sh@54 -- # local file 00:18:04.866 15:16:34 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:04.866 15:16:34 -- target/dif.sh@56 -- # cat 00:18:04.866 15:16:34 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:04.866 15:16:34 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:04.866 15:16:34 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:04.866 15:16:34 -- common/autotest_common.sh@1330 -- # shift 00:18:04.866 15:16:34 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:04.866 15:16:34 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:04.866 15:16:34 -- nvmf/common.sh@542 -- # cat 00:18:04.866 15:16:34 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:04.866 15:16:34 
-- target/dif.sh@72 -- # (( file = 1 )) 00:18:04.866 15:16:34 -- target/dif.sh@72 -- # (( file <= files )) 00:18:04.866 15:16:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:04.866 15:16:34 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:04.866 15:16:34 -- nvmf/common.sh@544 -- # jq . 00:18:04.866 15:16:34 -- nvmf/common.sh@545 -- # IFS=, 00:18:04.866 15:16:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:04.866 "params": { 00:18:04.866 "name": "Nvme0", 00:18:04.866 "trtype": "tcp", 00:18:04.866 "traddr": "10.0.0.2", 00:18:04.866 "adrfam": "ipv4", 00:18:04.866 "trsvcid": "4420", 00:18:04.866 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:04.866 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:04.866 "hdgst": false, 00:18:04.866 "ddgst": false 00:18:04.866 }, 00:18:04.866 "method": "bdev_nvme_attach_controller" 00:18:04.866 }' 00:18:04.866 15:16:34 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:04.866 15:16:34 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:04.866 15:16:34 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:04.866 15:16:34 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:04.866 15:16:34 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:04.866 15:16:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:04.866 15:16:34 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:04.866 15:16:34 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:04.866 15:16:34 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:04.866 15:16:34 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:05.124 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:18:05.124 ... 00:18:05.124 fio-3.35 00:18:05.124 Starting 3 threads 00:18:05.382 [2024-11-06 15:16:34.572142] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
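The job file that gen_fio_conf hands to fio on /dev/fd/61 is not echoed in this log, only the banner it produces above (randread, 128 KiB blocks, iodepth 3, 3 jobs over 5 seconds). A minimal sketch of a job file consistent with that banner is shown below as a shell here-document; thread=1, direct=1, time_based, the scratch path and the Nvme0n1 bdev name are assumptions for illustration, not values taken from the trace:

# hypothetical stand-in for the job file gen_fio_conf generates (sketch only)
cat <<'FIO' > /tmp/filename0.fio
[global]
thread=1            ; assumed: SPDK fio plugins are normally run with threads
ioengine=spdk_bdev
direct=1            ; assumed
time_based=1        ; assumed
runtime=5
rw=randread
bs=128k
iodepth=3
numjobs=3

[filename0]
filename=Nvme0n1    ; assumed name of the bdev created by bdev_nvme_attach_controller
FIO

The bdev_nvme_attach_controller parameters printed just above end up in the JSON that fio reads from /dev/fd/62 (--spdk_json_conf), so the plugin attaches the NVMe/TCP target at 10.0.0.2:4420 before the jobs start.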
00:18:05.382 [2024-11-06 15:16:34.572235] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:10.647 00:18:10.647 filename0: (groupid=0, jobs=1): err= 0: pid=74936: Wed Nov 6 15:16:39 2024 00:18:10.647 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(165MiB/5003msec) 00:18:10.647 slat (nsec): min=6762, max=44487, avg=10258.53, stdev=4720.73 00:18:10.647 clat (usec): min=9770, max=12587, avg=11370.32, stdev=430.29 00:18:10.647 lat (usec): min=9778, max=12600, avg=11380.58, stdev=430.20 00:18:10.647 clat percentiles (usec): 00:18:10.647 | 1.00th=[10683], 5.00th=[10814], 10.00th=[10945], 20.00th=[10945], 00:18:10.647 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11207], 60.00th=[11469], 00:18:10.647 | 70.00th=[11600], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:18:10.647 | 99.00th=[12387], 99.50th=[12387], 99.90th=[12518], 99.95th=[12649], 00:18:10.647 | 99.99th=[12649] 00:18:10.647 bw ( KiB/s): min=33024, max=34560, per=33.35%, avg=33714.00, stdev=591.33, samples=9 00:18:10.647 iops : min= 258, max= 270, avg=263.33, stdev= 4.69, samples=9 00:18:10.647 lat (msec) : 10=0.23%, 20=99.77% 00:18:10.647 cpu : usr=91.86%, sys=7.56%, ctx=8, majf=0, minf=0 00:18:10.647 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:10.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.647 issued rwts: total=1317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.647 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:10.647 filename0: (groupid=0, jobs=1): err= 0: pid=74937: Wed Nov 6 15:16:39 2024 00:18:10.647 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(165MiB/5003msec) 00:18:10.647 slat (nsec): min=6620, max=45906, avg=10210.08, stdev=4501.90 00:18:10.647 clat (usec): min=10165, max=12797, avg=11370.52, stdev=422.01 00:18:10.647 lat (usec): min=10173, max=12824, avg=11380.73, stdev=422.24 00:18:10.647 clat percentiles (usec): 00:18:10.647 | 1.00th=[10683], 5.00th=[10814], 10.00th=[10945], 20.00th=[10945], 00:18:10.647 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:18:10.647 | 70.00th=[11600], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:18:10.647 | 99.00th=[12256], 99.50th=[12256], 99.90th=[12780], 99.95th=[12780], 00:18:10.647 | 99.99th=[12780] 00:18:10.647 bw ( KiB/s): min=33024, max=33792, per=33.34%, avg=33699.11, stdev=254.16, samples=9 00:18:10.647 iops : min= 258, max= 264, avg=263.22, stdev= 1.99, samples=9 00:18:10.647 lat (msec) : 20=100.00% 00:18:10.647 cpu : usr=91.84%, sys=7.60%, ctx=10, majf=0, minf=0 00:18:10.647 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:10.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.647 issued rwts: total=1317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.647 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:10.647 filename0: (groupid=0, jobs=1): err= 0: pid=74938: Wed Nov 6 15:16:39 2024 00:18:10.647 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(165MiB/5003msec) 00:18:10.647 slat (nsec): min=6928, max=52852, avg=11102.99, stdev=5617.10 00:18:10.647 clat (usec): min=8172, max=13116, avg=11366.66, stdev=454.54 00:18:10.647 lat (usec): min=8180, max=13140, avg=11377.76, stdev=454.90 00:18:10.647 clat percentiles (usec): 00:18:10.647 | 1.00th=[10552], 5.00th=[10814], 10.00th=[10945], 
20.00th=[10945], 00:18:10.647 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:18:10.647 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11994], 95.00th=[12125], 00:18:10.647 | 99.00th=[12387], 99.50th=[12518], 99.90th=[13042], 99.95th=[13173], 00:18:10.647 | 99.99th=[13173] 00:18:10.647 bw ( KiB/s): min=33024, max=34560, per=33.36%, avg=33721.33, stdev=697.30, samples=9 00:18:10.647 iops : min= 258, max= 270, avg=263.33, stdev= 5.57, samples=9 00:18:10.647 lat (msec) : 10=0.23%, 20=99.77% 00:18:10.647 cpu : usr=90.96%, sys=8.38%, ctx=22, majf=0, minf=0 00:18:10.647 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:10.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.647 issued rwts: total=1317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.647 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:10.647 00:18:10.647 Run status group 0 (all jobs): 00:18:10.647 READ: bw=98.7MiB/s (104MB/s), 32.9MiB/s-32.9MiB/s (34.5MB/s-34.5MB/s), io=494MiB (518MB), run=5003-5003msec 00:18:10.647 15:16:39 -- target/dif.sh@107 -- # destroy_subsystems 0 00:18:10.647 15:16:39 -- target/dif.sh@43 -- # local sub 00:18:10.647 15:16:39 -- target/dif.sh@45 -- # for sub in "$@" 00:18:10.647 15:16:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:10.648 15:16:39 -- target/dif.sh@36 -- # local sub_id=0 00:18:10.648 15:16:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:10.648 15:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.648 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:18:10.648 15:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.648 15:16:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:10.648 15:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.648 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:18:10.648 15:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.648 15:16:39 -- target/dif.sh@109 -- # NULL_DIF=2 00:18:10.648 15:16:39 -- target/dif.sh@109 -- # bs=4k 00:18:10.648 15:16:39 -- target/dif.sh@109 -- # numjobs=8 00:18:10.648 15:16:39 -- target/dif.sh@109 -- # iodepth=16 00:18:10.648 15:16:39 -- target/dif.sh@109 -- # runtime= 00:18:10.648 15:16:39 -- target/dif.sh@109 -- # files=2 00:18:10.648 15:16:39 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:18:10.648 15:16:39 -- target/dif.sh@28 -- # local sub 00:18:10.648 15:16:39 -- target/dif.sh@30 -- # for sub in "$@" 00:18:10.648 15:16:39 -- target/dif.sh@31 -- # create_subsystem 0 00:18:10.648 15:16:39 -- target/dif.sh@18 -- # local sub_id=0 00:18:10.648 15:16:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:18:10.648 15:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.648 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:18:10.648 bdev_null0 00:18:10.648 15:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.648 15:16:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:10.648 15:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.648 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:18:10.648 15:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.648 15:16:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:10.648 15:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.648 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:18:10.648 15:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.648 15:16:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:10.648 15:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.648 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:18:10.648 [2024-11-06 15:16:39.922141] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.906 15:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.906 15:16:39 -- target/dif.sh@30 -- # for sub in "$@" 00:18:10.906 15:16:39 -- target/dif.sh@31 -- # create_subsystem 1 00:18:10.906 15:16:39 -- target/dif.sh@18 -- # local sub_id=1 00:18:10.906 15:16:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:18:10.906 15:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.906 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:18:10.906 bdev_null1 00:18:10.906 15:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.906 15:16:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:10.906 15:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.906 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:18:10.906 15:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.906 15:16:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:10.906 15:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.906 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:18:10.906 15:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.906 15:16:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:10.906 15:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.906 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:18:10.906 15:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.906 15:16:39 -- target/dif.sh@30 -- # for sub in "$@" 00:18:10.906 15:16:39 -- target/dif.sh@31 -- # create_subsystem 2 00:18:10.906 15:16:39 -- target/dif.sh@18 -- # local sub_id=2 00:18:10.906 15:16:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:18:10.906 15:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.906 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:18:10.906 bdev_null2 00:18:10.906 15:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.906 15:16:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:18:10.906 15:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.906 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:18:10.906 15:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.906 15:16:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:18:10.906 15:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.906 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:18:10.906 15:16:39 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:10.906 15:16:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:10.906 15:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.906 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:18:10.906 15:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.907 15:16:39 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:18:10.907 15:16:39 -- target/dif.sh@112 -- # fio /dev/fd/62 00:18:10.907 15:16:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:18:10.907 15:16:39 -- nvmf/common.sh@520 -- # config=() 00:18:10.907 15:16:39 -- nvmf/common.sh@520 -- # local subsystem config 00:18:10.907 15:16:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:10.907 15:16:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:10.907 { 00:18:10.907 "params": { 00:18:10.907 "name": "Nvme$subsystem", 00:18:10.907 "trtype": "$TEST_TRANSPORT", 00:18:10.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:10.907 "adrfam": "ipv4", 00:18:10.907 "trsvcid": "$NVMF_PORT", 00:18:10.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:10.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:10.907 "hdgst": ${hdgst:-false}, 00:18:10.907 "ddgst": ${ddgst:-false} 00:18:10.907 }, 00:18:10.907 "method": "bdev_nvme_attach_controller" 00:18:10.907 } 00:18:10.907 EOF 00:18:10.907 )") 00:18:10.907 15:16:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:10.907 15:16:39 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:10.907 15:16:40 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:10.907 15:16:40 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:10.907 15:16:39 -- nvmf/common.sh@542 -- # cat 00:18:10.907 15:16:40 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:10.907 15:16:40 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:10.907 15:16:40 -- common/autotest_common.sh@1330 -- # shift 00:18:10.907 15:16:40 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:10.907 15:16:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:10.907 15:16:40 -- target/dif.sh@82 -- # gen_fio_conf 00:18:10.907 15:16:40 -- target/dif.sh@54 -- # local file 00:18:10.907 15:16:40 -- target/dif.sh@56 -- # cat 00:18:10.907 15:16:40 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:10.907 15:16:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:10.907 15:16:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:10.907 { 00:18:10.907 "params": { 00:18:10.907 "name": "Nvme$subsystem", 00:18:10.907 "trtype": "$TEST_TRANSPORT", 00:18:10.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:10.907 "adrfam": "ipv4", 00:18:10.907 "trsvcid": "$NVMF_PORT", 00:18:10.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:10.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:10.907 "hdgst": ${hdgst:-false}, 00:18:10.907 "ddgst": ${ddgst:-false} 00:18:10.907 }, 00:18:10.907 "method": "bdev_nvme_attach_controller" 00:18:10.907 } 00:18:10.907 EOF 00:18:10.907 )") 00:18:10.907 15:16:40 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:10.907 15:16:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:10.907 15:16:40 -- 
nvmf/common.sh@542 -- # cat 00:18:10.907 15:16:40 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:10.907 15:16:40 -- target/dif.sh@72 -- # (( file <= files )) 00:18:10.907 15:16:40 -- target/dif.sh@73 -- # cat 00:18:10.907 15:16:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:10.907 15:16:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:10.907 { 00:18:10.907 "params": { 00:18:10.907 "name": "Nvme$subsystem", 00:18:10.907 "trtype": "$TEST_TRANSPORT", 00:18:10.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:10.907 "adrfam": "ipv4", 00:18:10.907 "trsvcid": "$NVMF_PORT", 00:18:10.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:10.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:10.907 "hdgst": ${hdgst:-false}, 00:18:10.907 "ddgst": ${ddgst:-false} 00:18:10.907 }, 00:18:10.907 "method": "bdev_nvme_attach_controller" 00:18:10.907 } 00:18:10.907 EOF 00:18:10.907 )") 00:18:10.907 15:16:40 -- target/dif.sh@72 -- # (( file++ )) 00:18:10.907 15:16:40 -- target/dif.sh@72 -- # (( file <= files )) 00:18:10.907 15:16:40 -- target/dif.sh@73 -- # cat 00:18:10.907 15:16:40 -- nvmf/common.sh@542 -- # cat 00:18:10.907 15:16:40 -- target/dif.sh@72 -- # (( file++ )) 00:18:10.907 15:16:40 -- target/dif.sh@72 -- # (( file <= files )) 00:18:10.907 15:16:40 -- nvmf/common.sh@544 -- # jq . 00:18:10.907 15:16:40 -- nvmf/common.sh@545 -- # IFS=, 00:18:10.907 15:16:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:10.907 "params": { 00:18:10.907 "name": "Nvme0", 00:18:10.907 "trtype": "tcp", 00:18:10.907 "traddr": "10.0.0.2", 00:18:10.907 "adrfam": "ipv4", 00:18:10.907 "trsvcid": "4420", 00:18:10.907 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:10.907 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:10.907 "hdgst": false, 00:18:10.907 "ddgst": false 00:18:10.907 }, 00:18:10.907 "method": "bdev_nvme_attach_controller" 00:18:10.907 },{ 00:18:10.907 "params": { 00:18:10.907 "name": "Nvme1", 00:18:10.907 "trtype": "tcp", 00:18:10.907 "traddr": "10.0.0.2", 00:18:10.907 "adrfam": "ipv4", 00:18:10.907 "trsvcid": "4420", 00:18:10.907 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.907 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:10.907 "hdgst": false, 00:18:10.907 "ddgst": false 00:18:10.907 }, 00:18:10.907 "method": "bdev_nvme_attach_controller" 00:18:10.907 },{ 00:18:10.907 "params": { 00:18:10.907 "name": "Nvme2", 00:18:10.907 "trtype": "tcp", 00:18:10.907 "traddr": "10.0.0.2", 00:18:10.907 "adrfam": "ipv4", 00:18:10.907 "trsvcid": "4420", 00:18:10.907 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:10.907 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:10.907 "hdgst": false, 00:18:10.907 "ddgst": false 00:18:10.907 }, 00:18:10.907 "method": "bdev_nvme_attach_controller" 00:18:10.907 }' 00:18:10.907 15:16:40 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:10.907 15:16:40 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:10.907 15:16:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:10.907 15:16:40 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:10.907 15:16:40 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:10.907 15:16:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:10.907 15:16:40 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:10.907 15:16:40 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:10.907 15:16:40 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:10.907 15:16:40 -- 
common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:11.166 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:11.166 ... 00:18:11.166 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:11.166 ... 00:18:11.166 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:11.166 ... 00:18:11.166 fio-3.35 00:18:11.166 Starting 24 threads 00:18:11.732 [2024-11-06 15:16:40.700841] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:18:11.732 [2024-11-06 15:16:40.700931] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:21.702 00:18:21.702 filename0: (groupid=0, jobs=1): err= 0: pid=75035: Wed Nov 6 15:16:50 2024 00:18:21.702 read: IOPS=215, BW=862KiB/s (883kB/s)(8628KiB/10010msec) 00:18:21.702 slat (usec): min=4, max=8028, avg=25.66, stdev=298.63 00:18:21.702 clat (msec): min=11, max=143, avg=74.13, stdev=21.22 00:18:21.702 lat (msec): min=11, max=143, avg=74.16, stdev=21.22 00:18:21.702 clat percentiles (msec): 00:18:21.702 | 1.00th=[ 28], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:18:21.702 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 72], 00:18:21.702 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 109], 00:18:21.702 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:18:21.702 | 99.99th=[ 144] 00:18:21.702 bw ( KiB/s): min= 632, max= 1000, per=4.06%, avg=845.11, stdev=135.77, samples=19 00:18:21.702 iops : min= 158, max= 250, avg=211.26, stdev=33.95, samples=19 00:18:21.702 lat (msec) : 20=0.74%, 50=15.62%, 100=68.71%, 250=14.93% 00:18:21.702 cpu : usr=31.46%, sys=1.77%, ctx=898, majf=0, minf=9 00:18:21.702 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:18:21.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.702 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.702 issued rwts: total=2157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.702 filename0: (groupid=0, jobs=1): err= 0: pid=75036: Wed Nov 6 15:16:50 2024 00:18:21.702 read: IOPS=227, BW=910KiB/s (932kB/s)(9100KiB/10002msec) 00:18:21.702 slat (usec): min=4, max=8033, avg=28.55, stdev=335.82 00:18:21.702 clat (usec): min=1039, max=132493, avg=70216.21, stdev=25412.11 00:18:21.702 lat (usec): min=1046, max=132505, avg=70244.77, stdev=25415.56 00:18:21.702 clat percentiles (usec): 00:18:21.702 | 1.00th=[ 1483], 5.00th=[ 27919], 10.00th=[ 47449], 20.00th=[ 47973], 00:18:21.702 | 30.00th=[ 60031], 40.00th=[ 68682], 50.00th=[ 71828], 60.00th=[ 71828], 00:18:21.702 | 70.00th=[ 74974], 80.00th=[ 95945], 90.00th=[107480], 95.00th=[110625], 00:18:21.702 | 99.00th=[120062], 99.50th=[121111], 99.90th=[131597], 99.95th=[132645], 00:18:21.702 | 99.99th=[132645] 00:18:21.702 bw ( KiB/s): min= 696, max= 1048, per=4.12%, avg=858.11, stdev=134.89, samples=19 00:18:21.702 iops : min= 174, max= 262, avg=214.53, stdev=33.72, samples=19 00:18:21.702 lat (msec) : 2=1.41%, 4=1.71%, 10=0.97%, 20=0.57%, 50=18.73% 00:18:21.702 lat (msec) : 100=62.20%, 250=14.42% 00:18:21.702 cpu : usr=31.67%, sys=1.65%, ctx=866, majf=0, minf=9 00:18:21.702 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=80.2%, 
16=15.4%, 32=0.0%, >=64=0.0% 00:18:21.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.702 complete : 0=0.0%, 4=87.8%, 8=11.4%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.702 issued rwts: total=2275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.702 filename0: (groupid=0, jobs=1): err= 0: pid=75037: Wed Nov 6 15:16:50 2024 00:18:21.702 read: IOPS=220, BW=881KiB/s (902kB/s)(8816KiB/10003msec) 00:18:21.702 slat (usec): min=3, max=8028, avg=33.39, stdev=343.63 00:18:21.702 clat (msec): min=3, max=152, avg=72.45, stdev=23.66 00:18:21.702 lat (msec): min=3, max=152, avg=72.48, stdev=23.65 00:18:21.702 clat percentiles (msec): 00:18:21.702 | 1.00th=[ 5], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 51], 00:18:21.702 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:18:21.702 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 112], 00:18:21.702 | 99.00th=[ 121], 99.50th=[ 138], 99.90th=[ 138], 99.95th=[ 153], 00:18:21.702 | 99.99th=[ 153] 00:18:21.702 bw ( KiB/s): min= 640, max= 1024, per=4.08%, avg=848.42, stdev=141.12, samples=19 00:18:21.702 iops : min= 160, max= 256, avg=212.11, stdev=35.28, samples=19 00:18:21.702 lat (msec) : 4=1.00%, 10=0.91%, 20=0.54%, 50=17.60%, 100=65.20% 00:18:21.702 lat (msec) : 250=14.75% 00:18:21.702 cpu : usr=36.57%, sys=2.07%, ctx=1090, majf=0, minf=9 00:18:21.702 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=81.0%, 16=15.6%, 32=0.0%, >=64=0.0% 00:18:21.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.702 complete : 0=0.0%, 4=87.6%, 8=11.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.702 issued rwts: total=2204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.702 filename0: (groupid=0, jobs=1): err= 0: pid=75038: Wed Nov 6 15:16:50 2024 00:18:21.702 read: IOPS=221, BW=886KiB/s (907kB/s)(8892KiB/10038msec) 00:18:21.702 slat (usec): min=6, max=4026, avg=18.21, stdev=120.36 00:18:21.702 clat (msec): min=13, max=145, avg=72.06, stdev=22.93 00:18:21.702 lat (msec): min=13, max=145, avg=72.08, stdev=22.93 00:18:21.702 clat percentiles (msec): 00:18:21.703 | 1.00th=[ 15], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 52], 00:18:21.703 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 73], 00:18:21.703 | 70.00th=[ 81], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 111], 00:18:21.703 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 136], 99.95th=[ 144], 00:18:21.703 | 99.99th=[ 146] 00:18:21.703 bw ( KiB/s): min= 640, max= 1496, per=4.25%, avg=885.60, stdev=194.41, samples=20 00:18:21.703 iops : min= 160, max= 374, avg=221.40, stdev=48.60, samples=20 00:18:21.703 lat (msec) : 20=2.16%, 50=16.73%, 100=63.83%, 250=17.27% 00:18:21.703 cpu : usr=42.74%, sys=2.33%, ctx=1265, majf=0, minf=9 00:18:21.703 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.6%, 16=16.3%, 32=0.0%, >=64=0.0% 00:18:21.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.703 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.703 issued rwts: total=2223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.703 filename0: (groupid=0, jobs=1): err= 0: pid=75039: Wed Nov 6 15:16:50 2024 00:18:21.703 read: IOPS=220, BW=883KiB/s (904kB/s)(8836KiB/10007msec) 00:18:21.703 slat (usec): min=4, max=4023, avg=19.06, stdev=112.32 00:18:21.703 clat (msec): min=7, max=139, avg=72.37, 
stdev=22.05 00:18:21.703 lat (msec): min=7, max=139, avg=72.39, stdev=22.05 00:18:21.703 clat percentiles (msec): 00:18:21.703 | 1.00th=[ 23], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 52], 00:18:21.703 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 73], 00:18:21.703 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 107], 95.00th=[ 112], 00:18:21.703 | 99.00th=[ 121], 99.50th=[ 126], 99.90th=[ 138], 99.95th=[ 140], 00:18:21.703 | 99.99th=[ 140] 00:18:21.703 bw ( KiB/s): min= 704, max= 1080, per=4.15%, avg=864.42, stdev=141.50, samples=19 00:18:21.703 iops : min= 176, max= 270, avg=216.11, stdev=35.37, samples=19 00:18:21.703 lat (msec) : 10=0.41%, 20=0.59%, 50=17.47%, 100=66.41%, 250=15.12% 00:18:21.703 cpu : usr=40.69%, sys=2.06%, ctx=1275, majf=0, minf=9 00:18:21.703 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=80.4%, 16=15.3%, 32=0.0%, >=64=0.0% 00:18:21.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.703 complete : 0=0.0%, 4=87.6%, 8=11.7%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.703 issued rwts: total=2209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.703 filename0: (groupid=0, jobs=1): err= 0: pid=75040: Wed Nov 6 15:16:50 2024 00:18:21.703 read: IOPS=223, BW=893KiB/s (915kB/s)(8940KiB/10008msec) 00:18:21.703 slat (usec): min=3, max=8028, avg=25.37, stdev=254.19 00:18:21.703 clat (msec): min=10, max=164, avg=71.51, stdev=22.47 00:18:21.703 lat (msec): min=10, max=164, avg=71.54, stdev=22.47 00:18:21.703 clat percentiles (msec): 00:18:21.703 | 1.00th=[ 23], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 49], 00:18:21.703 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 72], 00:18:21.703 | 70.00th=[ 78], 80.00th=[ 94], 90.00th=[ 107], 95.00th=[ 110], 00:18:21.703 | 99.00th=[ 122], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 165], 00:18:21.703 | 99.99th=[ 165] 00:18:21.703 bw ( KiB/s): min= 688, max= 1072, per=4.21%, avg=876.21, stdev=141.89, samples=19 00:18:21.703 iops : min= 172, max= 268, avg=219.05, stdev=35.47, samples=19 00:18:21.703 lat (msec) : 20=0.89%, 50=20.54%, 100=64.30%, 250=14.27% 00:18:21.703 cpu : usr=43.68%, sys=2.47%, ctx=966, majf=0, minf=9 00:18:21.703 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.2%, 16=15.7%, 32=0.0%, >=64=0.0% 00:18:21.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.703 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.703 issued rwts: total=2235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.703 filename0: (groupid=0, jobs=1): err= 0: pid=75041: Wed Nov 6 15:16:50 2024 00:18:21.703 read: IOPS=197, BW=792KiB/s (811kB/s)(7952KiB/10042msec) 00:18:21.703 slat (usec): min=4, max=6518, avg=20.32, stdev=192.94 00:18:21.703 clat (msec): min=7, max=145, avg=80.58, stdev=24.57 00:18:21.703 lat (msec): min=7, max=145, avg=80.60, stdev=24.58 00:18:21.703 clat percentiles (msec): 00:18:21.703 | 1.00th=[ 11], 5.00th=[ 46], 10.00th=[ 52], 20.00th=[ 62], 00:18:21.703 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 88], 00:18:21.703 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 111], 95.00th=[ 114], 00:18:21.703 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:18:21.703 | 99.99th=[ 146] 00:18:21.703 bw ( KiB/s): min= 528, max= 1408, per=3.79%, avg=788.80, stdev=205.61, samples=20 00:18:21.703 iops : min= 132, max= 352, avg=197.20, stdev=51.40, samples=20 00:18:21.703 lat (msec) : 10=0.80%, 20=2.41%, 
50=6.44%, 100=64.13%, 250=26.21% 00:18:21.703 cpu : usr=43.84%, sys=2.36%, ctx=1356, majf=0, minf=9 00:18:21.703 IO depths : 1=0.2%, 2=3.9%, 4=15.3%, 8=66.5%, 16=14.1%, 32=0.0%, >=64=0.0% 00:18:21.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.703 complete : 0=0.0%, 4=91.7%, 8=4.9%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.703 issued rwts: total=1988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.703 filename0: (groupid=0, jobs=1): err= 0: pid=75042: Wed Nov 6 15:16:50 2024 00:18:21.703 read: IOPS=217, BW=870KiB/s (891kB/s)(8712KiB/10013msec) 00:18:21.703 slat (usec): min=4, max=8026, avg=28.56, stdev=285.71 00:18:21.703 clat (msec): min=13, max=134, avg=73.40, stdev=21.10 00:18:21.703 lat (msec): min=13, max=134, avg=73.43, stdev=21.10 00:18:21.703 clat percentiles (msec): 00:18:21.703 | 1.00th=[ 34], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 54], 00:18:21.703 | 30.00th=[ 64], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:18:21.703 | 70.00th=[ 81], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 111], 00:18:21.703 | 99.00th=[ 120], 99.50th=[ 120], 99.90th=[ 128], 99.95th=[ 134], 00:18:21.703 | 99.99th=[ 136] 00:18:21.703 bw ( KiB/s): min= 664, max= 1096, per=4.17%, avg=867.20, stdev=143.48, samples=20 00:18:21.703 iops : min= 166, max= 274, avg=216.75, stdev=35.83, samples=20 00:18:21.703 lat (msec) : 20=0.28%, 50=16.35%, 100=68.46%, 250=14.92% 00:18:21.703 cpu : usr=39.94%, sys=1.81%, ctx=1319, majf=0, minf=9 00:18:21.703 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.3%, 16=16.3%, 32=0.0%, >=64=0.0% 00:18:21.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.703 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.703 issued rwts: total=2178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.703 filename1: (groupid=0, jobs=1): err= 0: pid=75043: Wed Nov 6 15:16:50 2024 00:18:21.703 read: IOPS=222, BW=888KiB/s (910kB/s)(8896KiB/10013msec) 00:18:21.703 slat (usec): min=3, max=8024, avg=20.96, stdev=186.91 00:18:21.703 clat (msec): min=11, max=144, avg=71.93, stdev=21.71 00:18:21.703 lat (msec): min=11, max=144, avg=71.95, stdev=21.71 00:18:21.703 clat percentiles (msec): 00:18:21.703 | 1.00th=[ 32], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 51], 00:18:21.703 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 72], 00:18:21.703 | 70.00th=[ 80], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 111], 00:18:21.703 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 136], 99.95th=[ 136], 00:18:21.703 | 99.99th=[ 144] 00:18:21.703 bw ( KiB/s): min= 688, max= 1024, per=4.18%, avg=870.00, stdev=130.56, samples=19 00:18:21.703 iops : min= 172, max= 256, avg=217.47, stdev=32.67, samples=19 00:18:21.703 lat (msec) : 20=0.58%, 50=19.33%, 100=64.66%, 250=15.42% 00:18:21.703 cpu : usr=41.41%, sys=2.45%, ctx=1220, majf=0, minf=9 00:18:21.703 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.0%, 16=15.9%, 32=0.0%, >=64=0.0% 00:18:21.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.703 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.703 issued rwts: total=2224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.703 filename1: (groupid=0, jobs=1): err= 0: pid=75044: Wed Nov 6 15:16:50 2024 00:18:21.703 read: IOPS=224, BW=898KiB/s (919kB/s)(9028KiB/10057msec) 00:18:21.703 
slat (usec): min=5, max=8029, avg=23.09, stdev=253.89 00:18:21.703 clat (usec): min=1524, max=144979, avg=71119.91, stdev=28454.93 00:18:21.703 lat (usec): min=1532, max=144991, avg=71143.00, stdev=28456.94 00:18:21.703 clat percentiles (usec): 00:18:21.703 | 1.00th=[ 1631], 5.00th=[ 4359], 10.00th=[ 40633], 20.00th=[ 50594], 00:18:21.703 | 30.00th=[ 60556], 40.00th=[ 69731], 50.00th=[ 71828], 60.00th=[ 72877], 00:18:21.703 | 70.00th=[ 84411], 80.00th=[ 95945], 90.00th=[107480], 95.00th=[109577], 00:18:21.703 | 99.00th=[120062], 99.50th=[120062], 99.90th=[139461], 99.95th=[143655], 00:18:21.703 | 99.99th=[145753] 00:18:21.703 bw ( KiB/s): min= 608, max= 2320, per=4.31%, avg=896.40, stdev=362.24, samples=20 00:18:21.703 iops : min= 152, max= 580, avg=224.10, stdev=90.56, samples=20 00:18:21.703 lat (msec) : 2=4.16%, 4=0.80%, 10=1.51%, 20=2.04%, 50=11.12% 00:18:21.703 lat (msec) : 100=62.65%, 250=17.72% 00:18:21.703 cpu : usr=37.10%, sys=1.86%, ctx=1381, majf=0, minf=0 00:18:21.703 IO depths : 1=0.3%, 2=1.2%, 4=3.5%, 8=78.8%, 16=16.2%, 32=0.0%, >=64=0.0% 00:18:21.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.703 complete : 0=0.0%, 4=88.6%, 8=10.6%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.703 issued rwts: total=2257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.703 filename1: (groupid=0, jobs=1): err= 0: pid=75045: Wed Nov 6 15:16:50 2024 00:18:21.703 read: IOPS=217, BW=868KiB/s (889kB/s)(8724KiB/10048msec) 00:18:21.703 slat (usec): min=4, max=3671, avg=16.51, stdev=78.51 00:18:21.703 clat (msec): min=5, max=141, avg=73.55, stdev=23.46 00:18:21.703 lat (msec): min=5, max=141, avg=73.57, stdev=23.46 00:18:21.703 clat percentiles (msec): 00:18:21.703 | 1.00th=[ 11], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 56], 00:18:21.703 | 30.00th=[ 64], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:18:21.703 | 70.00th=[ 83], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 111], 00:18:21.703 | 99.00th=[ 120], 99.50th=[ 123], 99.90th=[ 132], 99.95th=[ 136], 00:18:21.703 | 99.99th=[ 142] 00:18:21.703 bw ( KiB/s): min= 656, max= 1523, per=4.16%, avg=865.35, stdev=195.37, samples=20 00:18:21.703 iops : min= 164, max= 380, avg=216.30, stdev=48.71, samples=20 00:18:21.703 lat (msec) : 10=0.83%, 20=2.84%, 50=10.59%, 100=68.27%, 250=17.47% 00:18:21.703 cpu : usr=43.26%, sys=2.37%, ctx=1540, majf=0, minf=9 00:18:21.703 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=79.7%, 16=16.2%, 32=0.0%, >=64=0.0% 00:18:21.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.703 complete : 0=0.0%, 4=88.4%, 8=10.9%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.704 issued rwts: total=2181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.704 filename1: (groupid=0, jobs=1): err= 0: pid=75046: Wed Nov 6 15:16:50 2024 00:18:21.704 read: IOPS=216, BW=868KiB/s (888kB/s)(8688KiB/10014msec) 00:18:21.704 slat (usec): min=4, max=8026, avg=22.56, stdev=243.08 00:18:21.704 clat (msec): min=15, max=143, avg=73.66, stdev=21.33 00:18:21.704 lat (msec): min=15, max=143, avg=73.68, stdev=21.32 00:18:21.704 clat percentiles (msec): 00:18:21.704 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 57], 00:18:21.704 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:18:21.704 | 70.00th=[ 83], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 110], 00:18:21.704 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 144], 00:18:21.704 | 
99.99th=[ 144] 00:18:21.704 bw ( KiB/s): min= 616, max= 1128, per=4.15%, avg=864.70, stdev=141.87, samples=20 00:18:21.704 iops : min= 154, max= 282, avg=216.15, stdev=35.45, samples=20 00:18:21.704 lat (msec) : 20=0.46%, 50=17.22%, 100=67.82%, 250=14.50% 00:18:21.704 cpu : usr=31.48%, sys=1.74%, ctx=878, majf=0, minf=9 00:18:21.704 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.0%, 16=16.3%, 32=0.0%, >=64=0.0% 00:18:21.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.704 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.704 issued rwts: total=2172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.704 filename1: (groupid=0, jobs=1): err= 0: pid=75047: Wed Nov 6 15:16:50 2024 00:18:21.704 read: IOPS=215, BW=862KiB/s (883kB/s)(8652KiB/10034msec) 00:18:21.704 slat (usec): min=5, max=9023, avg=30.16, stdev=355.45 00:18:21.704 clat (msec): min=12, max=143, avg=74.02, stdev=21.97 00:18:21.704 lat (msec): min=12, max=143, avg=74.05, stdev=21.97 00:18:21.704 clat percentiles (msec): 00:18:21.704 | 1.00th=[ 14], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 61], 00:18:21.704 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 73], 00:18:21.704 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 109], 00:18:21.704 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 144], 00:18:21.704 | 99.99th=[ 144] 00:18:21.704 bw ( KiB/s): min= 616, max= 1400, per=4.14%, avg=861.25, stdev=171.49, samples=20 00:18:21.704 iops : min= 154, max= 350, avg=215.30, stdev=42.88, samples=20 00:18:21.704 lat (msec) : 20=2.22%, 50=13.27%, 100=69.58%, 250=14.93% 00:18:21.704 cpu : usr=32.35%, sys=1.58%, ctx=876, majf=0, minf=9 00:18:21.704 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.0%, 16=16.6%, 32=0.0%, >=64=0.0% 00:18:21.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.704 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.704 issued rwts: total=2163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.704 filename1: (groupid=0, jobs=1): err= 0: pid=75048: Wed Nov 6 15:16:50 2024 00:18:21.704 read: IOPS=213, BW=853KiB/s (874kB/s)(8560KiB/10034msec) 00:18:21.704 slat (nsec): min=4453, max=79984, avg=14212.22, stdev=4923.97 00:18:21.704 clat (msec): min=7, max=143, avg=74.88, stdev=23.15 00:18:21.704 lat (msec): min=7, max=143, avg=74.89, stdev=23.15 00:18:21.704 clat percentiles (msec): 00:18:21.704 | 1.00th=[ 15], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 56], 00:18:21.704 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:18:21.704 | 70.00th=[ 85], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 109], 00:18:21.704 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 132], 99.95th=[ 144], 00:18:21.704 | 99.99th=[ 144] 00:18:21.704 bw ( KiB/s): min= 608, max= 1416, per=4.09%, avg=851.70, stdev=194.82, samples=20 00:18:21.704 iops : min= 152, max= 354, avg=212.90, stdev=48.73, samples=20 00:18:21.704 lat (msec) : 10=0.09%, 20=2.06%, 50=13.88%, 100=65.05%, 250=18.93% 00:18:21.704 cpu : usr=36.06%, sys=1.90%, ctx=1137, majf=0, minf=9 00:18:21.704 IO depths : 1=0.1%, 2=0.3%, 4=0.9%, 8=81.9%, 16=16.9%, 32=0.0%, >=64=0.0% 00:18:21.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.704 complete : 0=0.0%, 4=87.9%, 8=11.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.704 issued rwts: total=2140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:18:21.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.704 filename1: (groupid=0, jobs=1): err= 0: pid=75049: Wed Nov 6 15:16:50 2024 00:18:21.704 read: IOPS=217, BW=869KiB/s (890kB/s)(8716KiB/10026msec) 00:18:21.704 slat (usec): min=8, max=8026, avg=29.71, stdev=342.94 00:18:21.704 clat (msec): min=6, max=145, avg=73.43, stdev=22.19 00:18:21.704 lat (msec): min=6, max=145, avg=73.46, stdev=22.19 00:18:21.704 clat percentiles (msec): 00:18:21.704 | 1.00th=[ 14], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 57], 00:18:21.704 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 72], 00:18:21.704 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 109], 00:18:21.704 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:18:21.704 | 99.99th=[ 146] 00:18:21.704 bw ( KiB/s): min= 640, max= 1344, per=4.17%, avg=868.00, stdev=171.67, samples=20 00:18:21.704 iops : min= 160, max= 336, avg=217.00, stdev=42.92, samples=20 00:18:21.704 lat (msec) : 10=0.09%, 20=1.28%, 50=17.62%, 100=65.40%, 250=15.60% 00:18:21.704 cpu : usr=31.71%, sys=1.58%, ctx=869, majf=0, minf=9 00:18:21.704 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.5%, 16=16.6%, 32=0.0%, >=64=0.0% 00:18:21.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.704 complete : 0=0.0%, 4=87.6%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.704 issued rwts: total=2179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.704 filename1: (groupid=0, jobs=1): err= 0: pid=75050: Wed Nov 6 15:16:50 2024 00:18:21.704 read: IOPS=218, BW=874KiB/s (895kB/s)(8748KiB/10011msec) 00:18:21.704 slat (usec): min=4, max=8031, avg=21.51, stdev=242.36 00:18:21.704 clat (msec): min=12, max=138, avg=73.09, stdev=21.22 00:18:21.704 lat (msec): min=12, max=138, avg=73.11, stdev=21.23 00:18:21.704 clat percentiles (msec): 00:18:21.704 | 1.00th=[ 29], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 56], 00:18:21.704 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 72], 00:18:21.704 | 70.00th=[ 83], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 109], 00:18:21.704 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 127], 99.95th=[ 127], 00:18:21.704 | 99.99th=[ 140] 00:18:21.704 bw ( KiB/s): min= 680, max= 1024, per=4.12%, avg=857.26, stdev=131.95, samples=19 00:18:21.704 iops : min= 170, max= 256, avg=214.32, stdev=32.99, samples=19 00:18:21.704 lat (msec) : 20=0.41%, 50=17.97%, 100=67.40%, 250=14.22% 00:18:21.704 cpu : usr=31.54%, sys=1.66%, ctx=878, majf=0, minf=9 00:18:21.704 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:18:21.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.704 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.704 issued rwts: total=2187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.704 filename2: (groupid=0, jobs=1): err= 0: pid=75051: Wed Nov 6 15:16:50 2024 00:18:21.704 read: IOPS=214, BW=857KiB/s (877kB/s)(8576KiB/10012msec) 00:18:21.704 slat (usec): min=4, max=8030, avg=26.29, stdev=246.43 00:18:21.704 clat (msec): min=13, max=136, avg=74.56, stdev=21.94 00:18:21.704 lat (msec): min=13, max=136, avg=74.59, stdev=21.93 00:18:21.704 clat percentiles (msec): 00:18:21.704 | 1.00th=[ 31], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 53], 00:18:21.704 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:18:21.704 | 70.00th=[ 84], 80.00th=[ 97], 90.00th=[ 
108], 95.00th=[ 112], 00:18:21.704 | 99.00th=[ 121], 99.50th=[ 128], 99.90th=[ 132], 99.95th=[ 138], 00:18:21.704 | 99.99th=[ 138] 00:18:21.704 bw ( KiB/s): min= 682, max= 1072, per=4.03%, avg=839.74, stdev=142.60, samples=19 00:18:21.704 iops : min= 170, max= 268, avg=209.89, stdev=35.69, samples=19 00:18:21.704 lat (msec) : 20=0.42%, 50=17.54%, 100=64.93%, 250=17.12% 00:18:21.704 cpu : usr=42.26%, sys=2.01%, ctx=1219, majf=0, minf=9 00:18:21.704 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=80.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:18:21.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.704 complete : 0=0.0%, 4=87.8%, 8=11.5%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.704 issued rwts: total=2144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.704 filename2: (groupid=0, jobs=1): err= 0: pid=75052: Wed Nov 6 15:16:50 2024 00:18:21.704 read: IOPS=213, BW=853KiB/s (874kB/s)(8572KiB/10044msec) 00:18:21.704 slat (usec): min=3, max=8024, avg=23.22, stdev=227.40 00:18:21.704 clat (msec): min=5, max=145, avg=74.77, stdev=24.47 00:18:21.704 lat (msec): min=5, max=145, avg=74.79, stdev=24.47 00:18:21.704 clat percentiles (msec): 00:18:21.704 | 1.00th=[ 6], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 57], 00:18:21.704 | 30.00th=[ 66], 40.00th=[ 69], 50.00th=[ 73], 60.00th=[ 77], 00:18:21.704 | 70.00th=[ 83], 80.00th=[ 99], 90.00th=[ 109], 95.00th=[ 112], 00:18:21.704 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:18:21.704 | 99.99th=[ 146] 00:18:21.704 bw ( KiB/s): min= 544, max= 1424, per=4.09%, avg=850.80, stdev=190.91, samples=20 00:18:21.704 iops : min= 136, max= 356, avg=212.70, stdev=47.73, samples=20 00:18:21.704 lat (msec) : 10=1.59%, 20=2.15%, 50=9.94%, 100=68.08%, 250=18.25% 00:18:21.704 cpu : usr=42.33%, sys=2.41%, ctx=1680, majf=0, minf=9 00:18:21.704 IO depths : 1=0.1%, 2=1.3%, 4=5.3%, 8=77.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:18:21.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.704 complete : 0=0.0%, 4=88.8%, 8=10.0%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.704 issued rwts: total=2143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.704 filename2: (groupid=0, jobs=1): err= 0: pid=75053: Wed Nov 6 15:16:50 2024 00:18:21.704 read: IOPS=219, BW=879KiB/s (901kB/s)(8800KiB/10006msec) 00:18:21.704 slat (usec): min=4, max=8038, avg=19.16, stdev=171.13 00:18:21.704 clat (msec): min=8, max=144, avg=72.69, stdev=22.93 00:18:21.704 lat (msec): min=8, max=144, avg=72.70, stdev=22.93 00:18:21.704 clat percentiles (msec): 00:18:21.704 | 1.00th=[ 16], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 50], 00:18:21.704 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 72], 00:18:21.704 | 70.00th=[ 83], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 112], 00:18:21.704 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 142], 99.95th=[ 142], 00:18:21.704 | 99.99th=[ 144] 00:18:21.704 bw ( KiB/s): min= 664, max= 1072, per=4.13%, avg=860.26, stdev=143.63, samples=19 00:18:21.704 iops : min= 166, max= 268, avg=215.05, stdev=35.92, samples=19 00:18:21.704 lat (msec) : 10=0.14%, 20=0.91%, 50=19.09%, 100=64.45%, 250=15.41% 00:18:21.705 cpu : usr=36.87%, sys=2.07%, ctx=1221, majf=0, minf=9 00:18:21.705 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=82.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:18:21.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.705 complete : 0=0.0%, 4=87.4%, 8=12.2%, 
16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.705 issued rwts: total=2200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.705 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.705 filename2: (groupid=0, jobs=1): err= 0: pid=75054: Wed Nov 6 15:16:50 2024 00:18:21.705 read: IOPS=213, BW=853KiB/s (873kB/s)(8540KiB/10016msec) 00:18:21.705 slat (usec): min=3, max=8025, avg=22.24, stdev=245.13 00:18:21.705 clat (msec): min=15, max=142, avg=74.92, stdev=21.32 00:18:21.705 lat (msec): min=15, max=142, avg=74.94, stdev=21.32 00:18:21.705 clat percentiles (msec): 00:18:21.705 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 58], 00:18:21.705 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 73], 00:18:21.705 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 109], 00:18:21.705 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:18:21.705 | 99.99th=[ 142] 00:18:21.705 bw ( KiB/s): min= 592, max= 1072, per=4.08%, avg=849.60, stdev=139.62, samples=20 00:18:21.705 iops : min= 148, max= 268, avg=212.40, stdev=34.90, samples=20 00:18:21.705 lat (msec) : 20=0.28%, 50=17.80%, 100=64.87%, 250=17.05% 00:18:21.705 cpu : usr=32.25%, sys=1.62%, ctx=879, majf=0, minf=9 00:18:21.705 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.2%, 16=16.2%, 32=0.0%, >=64=0.0% 00:18:21.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.705 complete : 0=0.0%, 4=87.8%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.705 issued rwts: total=2135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.705 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.705 filename2: (groupid=0, jobs=1): err= 0: pid=75055: Wed Nov 6 15:16:50 2024 00:18:21.705 read: IOPS=217, BW=869KiB/s (889kB/s)(8692KiB/10007msec) 00:18:21.705 slat (nsec): min=4473, max=42970, avg=15373.13, stdev=5372.03 00:18:21.705 clat (msec): min=10, max=136, avg=73.60, stdev=22.20 00:18:21.705 lat (msec): min=10, max=136, avg=73.61, stdev=22.20 00:18:21.705 clat percentiles (msec): 00:18:21.705 | 1.00th=[ 23], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:18:21.705 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:18:21.705 | 70.00th=[ 83], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 113], 00:18:21.705 | 99.00th=[ 121], 99.50th=[ 128], 99.90th=[ 132], 99.95th=[ 136], 00:18:21.705 | 99.99th=[ 136] 00:18:21.705 bw ( KiB/s): min= 640, max= 1056, per=4.07%, avg=847.63, stdev=139.62, samples=19 00:18:21.705 iops : min= 160, max= 264, avg=211.89, stdev=34.90, samples=19 00:18:21.705 lat (msec) : 20=0.87%, 50=16.38%, 100=66.96%, 250=15.78% 00:18:21.705 cpu : usr=37.53%, sys=2.01%, ctx=1118, majf=0, minf=9 00:18:21.705 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=80.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:18:21.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.705 complete : 0=0.0%, 4=87.8%, 8=11.4%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.705 issued rwts: total=2173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.705 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.705 filename2: (groupid=0, jobs=1): err= 0: pid=75056: Wed Nov 6 15:16:50 2024 00:18:21.705 read: IOPS=220, BW=881KiB/s (902kB/s)(8812KiB/10003msec) 00:18:21.705 slat (usec): min=3, max=8024, avg=26.08, stdev=266.71 00:18:21.705 clat (msec): min=2, max=131, avg=72.52, stdev=22.68 00:18:21.705 lat (msec): min=2, max=131, avg=72.55, stdev=22.68 00:18:21.705 clat percentiles (msec): 00:18:21.705 | 1.00th=[ 9], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 52], 00:18:21.705 | 
30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 73], 00:18:21.705 | 70.00th=[ 81], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 112], 00:18:21.705 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 128], 99.95th=[ 132], 00:18:21.705 | 99.99th=[ 132] 00:18:21.705 bw ( KiB/s): min= 648, max= 1024, per=4.10%, avg=853.79, stdev=138.18, samples=19 00:18:21.705 iops : min= 162, max= 256, avg=213.42, stdev=34.57, samples=19 00:18:21.705 lat (msec) : 4=0.41%, 10=0.86%, 20=0.45%, 50=17.16%, 100=66.50% 00:18:21.705 lat (msec) : 250=14.62% 00:18:21.705 cpu : usr=37.61%, sys=2.10%, ctx=1121, majf=0, minf=9 00:18:21.705 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=80.5%, 16=15.4%, 32=0.0%, >=64=0.0% 00:18:21.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.705 complete : 0=0.0%, 4=87.7%, 8=11.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.705 issued rwts: total=2203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.705 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.705 filename2: (groupid=0, jobs=1): err= 0: pid=75057: Wed Nov 6 15:16:50 2024 00:18:21.705 read: IOPS=216, BW=865KiB/s (886kB/s)(8684KiB/10041msec) 00:18:21.705 slat (usec): min=4, max=8024, avg=17.23, stdev=172.00 00:18:21.705 clat (msec): min=13, max=135, avg=73.85, stdev=22.45 00:18:21.705 lat (msec): min=13, max=135, avg=73.86, stdev=22.45 00:18:21.705 clat percentiles (msec): 00:18:21.705 | 1.00th=[ 15], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 58], 00:18:21.705 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:18:21.705 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 110], 00:18:21.705 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 134], 99.95th=[ 136], 00:18:21.705 | 99.99th=[ 136] 00:18:21.705 bw ( KiB/s): min= 608, max= 1394, per=4.14%, avg=862.10, stdev=173.19, samples=20 00:18:21.705 iops : min= 152, max= 348, avg=215.50, stdev=43.22, samples=20 00:18:21.705 lat (msec) : 20=2.12%, 50=14.79%, 100=68.26%, 250=14.83% 00:18:21.705 cpu : usr=31.71%, sys=1.64%, ctx=905, majf=0, minf=9 00:18:21.705 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.2%, 16=16.7%, 32=0.0%, >=64=0.0% 00:18:21.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.705 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.705 issued rwts: total=2171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.705 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.705 filename2: (groupid=0, jobs=1): err= 0: pid=75058: Wed Nov 6 15:16:50 2024 00:18:21.705 read: IOPS=216, BW=867KiB/s (888kB/s)(8692KiB/10026msec) 00:18:21.705 slat (usec): min=4, max=7036, avg=18.02, stdev=150.71 00:18:21.705 clat (msec): min=13, max=143, avg=73.68, stdev=21.74 00:18:21.705 lat (msec): min=13, max=143, avg=73.70, stdev=21.75 00:18:21.705 clat percentiles (msec): 00:18:21.705 | 1.00th=[ 20], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 55], 00:18:21.705 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:18:21.705 | 70.00th=[ 82], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 111], 00:18:21.705 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 140], 00:18:21.705 | 99.99th=[ 144] 00:18:21.705 bw ( KiB/s): min= 656, max= 1064, per=4.16%, avg=865.20, stdev=149.78, samples=20 00:18:21.705 iops : min= 164, max= 266, avg=216.30, stdev=37.45, samples=20 00:18:21.705 lat (msec) : 20=1.29%, 50=14.82%, 100=67.97%, 250=15.92% 00:18:21.705 cpu : usr=39.10%, sys=1.99%, ctx=1200, majf=0, minf=9 00:18:21.705 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.1%, 16=16.3%, 32=0.0%, 
>=64=0.0% 00:18:21.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.705 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.705 issued rwts: total=2173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.705 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:21.705 00:18:21.705 Run status group 0 (all jobs): 00:18:21.705 READ: bw=20.3MiB/s (21.3MB/s), 792KiB/s-910KiB/s (811kB/s-932kB/s), io=204MiB (214MB), run=10002-10057msec 00:18:21.964 15:16:51 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:18:21.964 15:16:51 -- target/dif.sh@43 -- # local sub 00:18:21.964 15:16:51 -- target/dif.sh@45 -- # for sub in "$@" 00:18:21.964 15:16:51 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:21.964 15:16:51 -- target/dif.sh@36 -- # local sub_id=0 00:18:21.964 15:16:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:21.964 15:16:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.964 15:16:51 -- common/autotest_common.sh@10 -- # set +x 00:18:21.964 15:16:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.964 15:16:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:21.964 15:16:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.964 15:16:51 -- common/autotest_common.sh@10 -- # set +x 00:18:21.964 15:16:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.964 15:16:51 -- target/dif.sh@45 -- # for sub in "$@" 00:18:21.964 15:16:51 -- target/dif.sh@46 -- # destroy_subsystem 1 00:18:21.964 15:16:51 -- target/dif.sh@36 -- # local sub_id=1 00:18:21.964 15:16:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:21.964 15:16:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.964 15:16:51 -- common/autotest_common.sh@10 -- # set +x 00:18:21.964 15:16:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.964 15:16:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:18:21.964 15:16:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.964 15:16:51 -- common/autotest_common.sh@10 -- # set +x 00:18:21.964 15:16:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.964 15:16:51 -- target/dif.sh@45 -- # for sub in "$@" 00:18:21.964 15:16:51 -- target/dif.sh@46 -- # destroy_subsystem 2 00:18:21.964 15:16:51 -- target/dif.sh@36 -- # local sub_id=2 00:18:21.964 15:16:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:21.964 15:16:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.964 15:16:51 -- common/autotest_common.sh@10 -- # set +x 00:18:21.964 15:16:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.964 15:16:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:18:21.964 15:16:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.964 15:16:51 -- common/autotest_common.sh@10 -- # set +x 00:18:21.964 15:16:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.964 15:16:51 -- target/dif.sh@115 -- # NULL_DIF=1 00:18:21.964 15:16:51 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:18:21.964 15:16:51 -- target/dif.sh@115 -- # numjobs=2 00:18:21.964 15:16:51 -- target/dif.sh@115 -- # iodepth=8 00:18:21.964 15:16:51 -- target/dif.sh@115 -- # runtime=5 00:18:21.964 15:16:51 -- target/dif.sh@115 -- # files=1 00:18:21.965 15:16:51 -- target/dif.sh@117 -- # create_subsystems 0 1 00:18:21.965 15:16:51 -- target/dif.sh@28 -- # local sub 00:18:21.965 
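Each of the three data paths torn down here is a null bdev exported through its own NVMe/TCP subsystem, and the next pass immediately rebuilds two of them with the parameters just set (NULL_DIF=1, 8k/16k/128k blocks, two jobs, queue depth 8). Condensed into plain RPC calls, the per-subsystem lifecycle amounts to the sketch below; it assumes that rpc_cmd in the trace resolves to SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock, and it reuses only identifiers that appear in the trace.

  # setup: null bdev with 16 bytes of metadata and DIF type 1, exported through one NVMe/TCP subsystem
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # teardown: remove the subsystem first, then delete the backing bdev
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_null_delete bdev_null0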
15:16:51 -- target/dif.sh@30 -- # for sub in "$@" 00:18:21.965 15:16:51 -- target/dif.sh@31 -- # create_subsystem 0 00:18:21.965 15:16:51 -- target/dif.sh@18 -- # local sub_id=0 00:18:21.965 15:16:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:21.965 15:16:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.965 15:16:51 -- common/autotest_common.sh@10 -- # set +x 00:18:21.965 bdev_null0 00:18:21.965 15:16:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.965 15:16:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:21.965 15:16:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.965 15:16:51 -- common/autotest_common.sh@10 -- # set +x 00:18:21.965 15:16:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.965 15:16:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:21.965 15:16:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.965 15:16:51 -- common/autotest_common.sh@10 -- # set +x 00:18:21.965 15:16:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.965 15:16:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:21.965 15:16:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.965 15:16:51 -- common/autotest_common.sh@10 -- # set +x 00:18:21.965 [2024-11-06 15:16:51.202563] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.965 15:16:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.965 15:16:51 -- target/dif.sh@30 -- # for sub in "$@" 00:18:21.965 15:16:51 -- target/dif.sh@31 -- # create_subsystem 1 00:18:21.965 15:16:51 -- target/dif.sh@18 -- # local sub_id=1 00:18:21.965 15:16:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:18:21.965 15:16:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.965 15:16:51 -- common/autotest_common.sh@10 -- # set +x 00:18:21.965 bdev_null1 00:18:21.965 15:16:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.965 15:16:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:21.965 15:16:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.965 15:16:51 -- common/autotest_common.sh@10 -- # set +x 00:18:21.965 15:16:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.965 15:16:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:21.965 15:16:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.965 15:16:51 -- common/autotest_common.sh@10 -- # set +x 00:18:21.965 15:16:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.965 15:16:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.965 15:16:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.965 15:16:51 -- common/autotest_common.sh@10 -- # set +x 00:18:21.965 15:16:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.223 15:16:51 -- target/dif.sh@118 -- # fio /dev/fd/62 00:18:22.224 15:16:51 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:18:22.224 15:16:51 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:18:22.224 15:16:51 -- nvmf/common.sh@520 -- # 
config=() 00:18:22.224 15:16:51 -- nvmf/common.sh@520 -- # local subsystem config 00:18:22.224 15:16:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:22.224 15:16:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:22.224 { 00:18:22.224 "params": { 00:18:22.224 "name": "Nvme$subsystem", 00:18:22.224 "trtype": "$TEST_TRANSPORT", 00:18:22.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:22.224 "adrfam": "ipv4", 00:18:22.224 "trsvcid": "$NVMF_PORT", 00:18:22.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:22.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:22.224 "hdgst": ${hdgst:-false}, 00:18:22.224 "ddgst": ${ddgst:-false} 00:18:22.224 }, 00:18:22.224 "method": "bdev_nvme_attach_controller" 00:18:22.224 } 00:18:22.224 EOF 00:18:22.224 )") 00:18:22.224 15:16:51 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:22.224 15:16:51 -- target/dif.sh@82 -- # gen_fio_conf 00:18:22.224 15:16:51 -- target/dif.sh@54 -- # local file 00:18:22.224 15:16:51 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:22.224 15:16:51 -- target/dif.sh@56 -- # cat 00:18:22.224 15:16:51 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:22.224 15:16:51 -- nvmf/common.sh@542 -- # cat 00:18:22.224 15:16:51 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:22.224 15:16:51 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:22.224 15:16:51 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:22.224 15:16:51 -- common/autotest_common.sh@1330 -- # shift 00:18:22.224 15:16:51 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:22.224 15:16:51 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:22.224 15:16:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:22.224 15:16:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:22.224 { 00:18:22.224 "params": { 00:18:22.224 "name": "Nvme$subsystem", 00:18:22.224 "trtype": "$TEST_TRANSPORT", 00:18:22.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:22.224 "adrfam": "ipv4", 00:18:22.224 "trsvcid": "$NVMF_PORT", 00:18:22.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:22.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:22.224 "hdgst": ${hdgst:-false}, 00:18:22.224 "ddgst": ${ddgst:-false} 00:18:22.224 }, 00:18:22.224 "method": "bdev_nvme_attach_controller" 00:18:22.224 } 00:18:22.224 EOF 00:18:22.224 )") 00:18:22.224 15:16:51 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:22.224 15:16:51 -- nvmf/common.sh@542 -- # cat 00:18:22.224 15:16:51 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:22.224 15:16:51 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:22.224 15:16:51 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:22.224 15:16:51 -- target/dif.sh@72 -- # (( file <= files )) 00:18:22.224 15:16:51 -- target/dif.sh@73 -- # cat 00:18:22.224 15:16:51 -- target/dif.sh@72 -- # (( file++ )) 00:18:22.224 15:16:51 -- target/dif.sh@72 -- # (( file <= files )) 00:18:22.224 15:16:51 -- nvmf/common.sh@544 -- # jq . 
00:18:22.224 15:16:51 -- nvmf/common.sh@545 -- # IFS=, 00:18:22.224 15:16:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:22.224 "params": { 00:18:22.224 "name": "Nvme0", 00:18:22.224 "trtype": "tcp", 00:18:22.224 "traddr": "10.0.0.2", 00:18:22.224 "adrfam": "ipv4", 00:18:22.224 "trsvcid": "4420", 00:18:22.224 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:22.224 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:22.224 "hdgst": false, 00:18:22.224 "ddgst": false 00:18:22.224 }, 00:18:22.224 "method": "bdev_nvme_attach_controller" 00:18:22.224 },{ 00:18:22.224 "params": { 00:18:22.224 "name": "Nvme1", 00:18:22.224 "trtype": "tcp", 00:18:22.224 "traddr": "10.0.0.2", 00:18:22.224 "adrfam": "ipv4", 00:18:22.224 "trsvcid": "4420", 00:18:22.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.224 "hdgst": false, 00:18:22.224 "ddgst": false 00:18:22.224 }, 00:18:22.224 "method": "bdev_nvme_attach_controller" 00:18:22.224 }' 00:18:22.224 15:16:51 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:22.224 15:16:51 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:22.224 15:16:51 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:22.224 15:16:51 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:22.224 15:16:51 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:22.224 15:16:51 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:22.224 15:16:51 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:22.224 15:16:51 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:22.224 15:16:51 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:22.224 15:16:51 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:22.224 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:18:22.224 ... 00:18:22.224 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:18:22.224 ... 00:18:22.224 fio-3.35 00:18:22.224 Starting 4 threads 00:18:22.791 [2024-11-06 15:16:51.827040] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
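Behind the file-descriptor plumbing, the run launched here is stock fio pointed at SPDK's bdev ioengine: the JSON printed above is fed in through --spdk_json_conf and attaches the two NVMe/TCP controllers, while the job file built by gen_fio_conf names one job per exported namespace. A standalone equivalent might look like the sketch below; the plugin path, fio binary and attach parameters are copied from the trace, whereas the JSON envelope, the job-file contents and the Nvme0n1 bdev name are illustrative assumptions (the trace never prints them).

  # bdev.json: one bdev_nvme_attach_controller entry per target (second controller omitted for brevity)
  cat > bdev.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [
    { "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                  "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false } }
  ] } ] }
  EOF

  # minimal job file; thread=1 because the SPDK fio plugin runs jobs as threads
  cat > dif.fio <<'EOF'
  [filename0]
  thread=1
  filename=Nvme0n1
  rw=randread
  bs=8k,16k,128k
  iodepth=8
  runtime=5
  EOF

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio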
00:18:22.791 [2024-11-06 15:16:51.827102] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:28.057 00:18:28.057 filename0: (groupid=0, jobs=1): err= 0: pid=75199: Wed Nov 6 15:16:56 2024 00:18:28.057 read: IOPS=1772, BW=13.9MiB/s (14.5MB/s)(69.3MiB/5004msec) 00:18:28.057 slat (nsec): min=6433, max=70249, avg=14567.71, stdev=5021.73 00:18:28.057 clat (usec): min=3987, max=5381, avg=4455.81, stdev=164.57 00:18:28.057 lat (usec): min=4000, max=5394, avg=4470.38, stdev=164.85 00:18:28.057 clat percentiles (usec): 00:18:28.057 | 1.00th=[ 4113], 5.00th=[ 4178], 10.00th=[ 4228], 20.00th=[ 4293], 00:18:28.057 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4424], 60.00th=[ 4490], 00:18:28.057 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4686], 95.00th=[ 4686], 00:18:28.057 | 99.00th=[ 4883], 99.50th=[ 4948], 99.90th=[ 5145], 99.95th=[ 5342], 00:18:28.057 | 99.99th=[ 5407] 00:18:28.057 bw ( KiB/s): min=13952, max=14720, per=21.06%, avg=14182.40, stdev=224.15, samples=10 00:18:28.057 iops : min= 1744, max= 1840, avg=1772.80, stdev=28.02, samples=10 00:18:28.057 lat (msec) : 4=0.02%, 10=99.98% 00:18:28.057 cpu : usr=92.04%, sys=7.14%, ctx=18, majf=0, minf=9 00:18:28.057 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:28.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.057 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.057 issued rwts: total=8872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.057 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:28.057 filename0: (groupid=0, jobs=1): err= 0: pid=75200: Wed Nov 6 15:16:56 2024 00:18:28.057 read: IOPS=2437, BW=19.0MiB/s (20.0MB/s)(95.2MiB/5002msec) 00:18:28.057 slat (nsec): min=7111, max=55864, avg=13948.81, stdev=4715.67 00:18:28.057 clat (usec): min=1635, max=7408, avg=3250.74, stdev=1058.73 00:18:28.057 lat (usec): min=1643, max=7431, avg=3264.69, stdev=1058.65 00:18:28.057 clat percentiles (usec): 00:18:28.057 | 1.00th=[ 1745], 5.00th=[ 1811], 10.00th=[ 1860], 20.00th=[ 1958], 00:18:28.057 | 30.00th=[ 2606], 40.00th=[ 2769], 50.00th=[ 2933], 60.00th=[ 4146], 00:18:28.057 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4555], 00:18:28.057 | 99.00th=[ 4686], 99.50th=[ 4752], 99.90th=[ 4883], 99.95th=[ 4948], 00:18:28.057 | 99.99th=[ 5014] 00:18:28.057 bw ( KiB/s): min=19088, max=19824, per=28.85%, avg=19427.56, stdev=202.19, samples=9 00:18:28.057 iops : min= 2386, max= 2478, avg=2428.44, stdev=25.27, samples=9 00:18:28.057 lat (msec) : 2=24.68%, 4=30.50%, 10=44.82% 00:18:28.057 cpu : usr=91.40%, sys=7.52%, ctx=45, majf=0, minf=9 00:18:28.057 IO depths : 1=0.1%, 2=0.1%, 4=63.6%, 8=36.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:28.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.057 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.057 issued rwts: total=12190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.057 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:28.057 filename1: (groupid=0, jobs=1): err= 0: pid=75201: Wed Nov 6 15:16:56 2024 00:18:28.057 read: IOPS=1773, BW=13.9MiB/s (14.5MB/s)(69.3MiB/5002msec) 00:18:28.057 slat (nsec): min=6815, max=70820, avg=14587.63, stdev=5388.53 00:18:28.057 clat (usec): min=1898, max=5356, avg=4449.69, stdev=179.32 00:18:28.057 lat (usec): min=1909, max=5393, avg=4464.28, stdev=180.13 00:18:28.057 clat percentiles (usec): 00:18:28.057 | 1.00th=[ 4113], 5.00th=[ 
4178], 10.00th=[ 4228], 20.00th=[ 4293], 00:18:28.057 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4424], 60.00th=[ 4490], 00:18:28.057 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4686], 95.00th=[ 4686], 00:18:28.057 | 99.00th=[ 4883], 99.50th=[ 4948], 99.90th=[ 5145], 99.95th=[ 5276], 00:18:28.057 | 99.99th=[ 5342] 00:18:28.057 bw ( KiB/s): min=13952, max=14336, per=20.99%, avg=14136.89, stdev=130.01, samples=9 00:18:28.057 iops : min= 1744, max= 1792, avg=1767.11, stdev=16.25, samples=9 00:18:28.057 lat (msec) : 2=0.03%, 4=0.06%, 10=99.91% 00:18:28.057 cpu : usr=91.84%, sys=7.34%, ctx=4, majf=0, minf=0 00:18:28.057 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:28.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.057 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.057 issued rwts: total=8872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.057 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:28.057 filename1: (groupid=0, jobs=1): err= 0: pid=75202: Wed Nov 6 15:16:56 2024 00:18:28.057 read: IOPS=2437, BW=19.0MiB/s (20.0MB/s)(95.2MiB/5001msec) 00:18:28.057 slat (nsec): min=6944, max=74876, avg=14679.95, stdev=5377.02 00:18:28.057 clat (usec): min=1631, max=6606, avg=3245.95, stdev=1056.30 00:18:28.057 lat (usec): min=1644, max=6631, avg=3260.63, stdev=1054.94 00:18:28.057 clat percentiles (usec): 00:18:28.057 | 1.00th=[ 1745], 5.00th=[ 1811], 10.00th=[ 1860], 20.00th=[ 1958], 00:18:28.057 | 30.00th=[ 2606], 40.00th=[ 2769], 50.00th=[ 2933], 60.00th=[ 4146], 00:18:28.057 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4555], 00:18:28.057 | 99.00th=[ 4686], 99.50th=[ 4752], 99.90th=[ 4883], 99.95th=[ 4948], 00:18:28.057 | 99.99th=[ 5080] 00:18:28.057 bw ( KiB/s): min=19158, max=19824, per=28.86%, avg=19435.33, stdev=188.37, samples=9 00:18:28.057 iops : min= 2394, max= 2478, avg=2429.33, stdev=23.69, samples=9 00:18:28.058 lat (msec) : 2=24.70%, 4=30.66%, 10=44.64% 00:18:28.058 cpu : usr=91.86%, sys=7.04%, ctx=22, majf=0, minf=9 00:18:28.058 IO depths : 1=0.1%, 2=0.1%, 4=63.6%, 8=36.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:28.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.058 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.058 issued rwts: total=12192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.058 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:28.058 00:18:28.058 Run status group 0 (all jobs): 00:18:28.058 READ: bw=65.8MiB/s (69.0MB/s), 13.9MiB/s-19.0MiB/s (14.5MB/s-20.0MB/s), io=329MiB (345MB), run=5001-5004msec 00:18:28.058 15:16:57 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:18:28.058 15:16:57 -- target/dif.sh@43 -- # local sub 00:18:28.058 15:16:57 -- target/dif.sh@45 -- # for sub in "$@" 00:18:28.058 15:16:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:28.058 15:16:57 -- target/dif.sh@36 -- # local sub_id=0 00:18:28.058 15:16:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:28.058 15:16:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.058 15:16:57 -- common/autotest_common.sh@10 -- # set +x 00:18:28.058 15:16:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.058 15:16:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:28.058 15:16:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.058 15:16:57 -- common/autotest_common.sh@10 -- # set +x 00:18:28.058 
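The group summary just above cross-checks cleanly against the per-job lines: with rw=randread these jobs only issue 8 KiB reads, so each job's bandwidth is simply IOPS times 8 KiB. That gives roughly 1773 * 8 KiB = ~13.9 MiB/s for the two slower jobs and 2437 * 8 KiB = ~19.0 MiB/s for the two faster ones, and 13.9 + 19.0 + 13.9 + 19.0 accounts for the reported 65.8 MiB/s aggregate.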
15:16:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.058 15:16:57 -- target/dif.sh@45 -- # for sub in "$@" 00:18:28.058 15:16:57 -- target/dif.sh@46 -- # destroy_subsystem 1 00:18:28.058 15:16:57 -- target/dif.sh@36 -- # local sub_id=1 00:18:28.058 15:16:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.058 15:16:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.058 15:16:57 -- common/autotest_common.sh@10 -- # set +x 00:18:28.058 15:16:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.058 15:16:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:18:28.058 15:16:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.058 15:16:57 -- common/autotest_common.sh@10 -- # set +x 00:18:28.058 ************************************ 00:18:28.058 END TEST fio_dif_rand_params 00:18:28.058 ************************************ 00:18:28.058 15:16:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.058 00:18:28.058 real 0m23.167s 00:18:28.058 user 2m3.999s 00:18:28.058 sys 0m8.075s 00:18:28.058 15:16:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:28.058 15:16:57 -- common/autotest_common.sh@10 -- # set +x 00:18:28.058 15:16:57 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:18:28.058 15:16:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:28.058 15:16:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:28.058 15:16:57 -- common/autotest_common.sh@10 -- # set +x 00:18:28.058 ************************************ 00:18:28.058 START TEST fio_dif_digest 00:18:28.058 ************************************ 00:18:28.058 15:16:57 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:18:28.058 15:16:57 -- target/dif.sh@123 -- # local NULL_DIF 00:18:28.058 15:16:57 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:18:28.058 15:16:57 -- target/dif.sh@125 -- # local hdgst ddgst 00:18:28.058 15:16:57 -- target/dif.sh@127 -- # NULL_DIF=3 00:18:28.058 15:16:57 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:18:28.058 15:16:57 -- target/dif.sh@127 -- # numjobs=3 00:18:28.058 15:16:57 -- target/dif.sh@127 -- # iodepth=3 00:18:28.058 15:16:57 -- target/dif.sh@127 -- # runtime=10 00:18:28.058 15:16:57 -- target/dif.sh@128 -- # hdgst=true 00:18:28.058 15:16:57 -- target/dif.sh@128 -- # ddgst=true 00:18:28.058 15:16:57 -- target/dif.sh@130 -- # create_subsystems 0 00:18:28.058 15:16:57 -- target/dif.sh@28 -- # local sub 00:18:28.058 15:16:57 -- target/dif.sh@30 -- # for sub in "$@" 00:18:28.058 15:16:57 -- target/dif.sh@31 -- # create_subsystem 0 00:18:28.058 15:16:57 -- target/dif.sh@18 -- # local sub_id=0 00:18:28.058 15:16:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:18:28.058 15:16:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.058 15:16:57 -- common/autotest_common.sh@10 -- # set +x 00:18:28.058 bdev_null0 00:18:28.058 15:16:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.058 15:16:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:28.058 15:16:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.058 15:16:57 -- common/autotest_common.sh@10 -- # set +x 00:18:28.058 15:16:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.058 15:16:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 
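Apart from the fio job shape set just above (128 KiB blocks, three jobs, queue depth 3, a 10-second run), the functional delta of the digest test versus the earlier passes sits on the NVMe/TCP path itself: the null bdev now carries DIF type 3 protection information, and the attach parameters in the JSON generated further down turn on the header and data digests (both CRC32C). As a sketch, again assuming rpc_cmd maps to scripts/rpc.py:

  # target side: protection information on the backing null bdev
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

  # host side: the two digest knobs inside the bdev_nvme_attach_controller params
  #   "hdgst": true,
  #   "ddgst": true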
00:18:28.058 15:16:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.058 15:16:57 -- common/autotest_common.sh@10 -- # set +x 00:18:28.058 15:16:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.058 15:16:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:28.058 15:16:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.058 15:16:57 -- common/autotest_common.sh@10 -- # set +x 00:18:28.058 [2024-11-06 15:16:57.254956] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.058 15:16:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.058 15:16:57 -- target/dif.sh@131 -- # fio /dev/fd/62 00:18:28.058 15:16:57 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:18:28.058 15:16:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:28.058 15:16:57 -- nvmf/common.sh@520 -- # config=() 00:18:28.058 15:16:57 -- nvmf/common.sh@520 -- # local subsystem config 00:18:28.058 15:16:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:28.058 15:16:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:28.058 15:16:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:28.058 { 00:18:28.058 "params": { 00:18:28.058 "name": "Nvme$subsystem", 00:18:28.058 "trtype": "$TEST_TRANSPORT", 00:18:28.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.058 "adrfam": "ipv4", 00:18:28.058 "trsvcid": "$NVMF_PORT", 00:18:28.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.058 "hdgst": ${hdgst:-false}, 00:18:28.058 "ddgst": ${ddgst:-false} 00:18:28.058 }, 00:18:28.058 "method": "bdev_nvme_attach_controller" 00:18:28.058 } 00:18:28.058 EOF 00:18:28.058 )") 00:18:28.058 15:16:57 -- target/dif.sh@82 -- # gen_fio_conf 00:18:28.058 15:16:57 -- target/dif.sh@54 -- # local file 00:18:28.058 15:16:57 -- target/dif.sh@56 -- # cat 00:18:28.058 15:16:57 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:28.058 15:16:57 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:28.058 15:16:57 -- nvmf/common.sh@542 -- # cat 00:18:28.058 15:16:57 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:28.058 15:16:57 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:28.058 15:16:57 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:28.058 15:16:57 -- common/autotest_common.sh@1330 -- # shift 00:18:28.058 15:16:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:28.058 15:16:57 -- target/dif.sh@72 -- # (( file <= files )) 00:18:28.058 15:16:57 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:28.058 15:16:57 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:28.058 15:16:57 -- nvmf/common.sh@544 -- # jq . 
00:18:28.058 15:16:57 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:28.058 15:16:57 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:28.058 15:16:57 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:28.058 15:16:57 -- nvmf/common.sh@545 -- # IFS=, 00:18:28.058 15:16:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:28.058 "params": { 00:18:28.058 "name": "Nvme0", 00:18:28.058 "trtype": "tcp", 00:18:28.058 "traddr": "10.0.0.2", 00:18:28.058 "adrfam": "ipv4", 00:18:28.058 "trsvcid": "4420", 00:18:28.058 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:28.058 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:28.058 "hdgst": true, 00:18:28.058 "ddgst": true 00:18:28.058 }, 00:18:28.058 "method": "bdev_nvme_attach_controller" 00:18:28.058 }' 00:18:28.058 15:16:57 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:28.058 15:16:57 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:28.058 15:16:57 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:28.058 15:16:57 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:28.058 15:16:57 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:28.058 15:16:57 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:28.058 15:16:57 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:28.317 15:16:57 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:28.317 15:16:57 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:28.317 15:16:57 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:28.317 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:18:28.317 ... 00:18:28.317 fio-3.35 00:18:28.317 Starting 3 threads 00:18:28.575 [2024-11-06 15:16:57.829180] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:18:28.575 [2024-11-06 15:16:57.829879] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:40.809 00:18:40.809 filename0: (groupid=0, jobs=1): err= 0: pid=75312: Wed Nov 6 15:17:07 2024 00:18:40.809 read: IOPS=231, BW=29.0MiB/s (30.4MB/s)(290MiB/10006msec) 00:18:40.809 slat (nsec): min=7205, max=60317, avg=16178.46, stdev=5322.21 00:18:40.809 clat (usec): min=11872, max=14569, avg=12908.40, stdev=493.48 00:18:40.809 lat (usec): min=11885, max=14604, avg=12924.58, stdev=493.81 00:18:40.809 clat percentiles (usec): 00:18:40.809 | 1.00th=[11994], 5.00th=[12125], 10.00th=[12256], 20.00th=[12518], 00:18:40.809 | 30.00th=[12649], 40.00th=[12649], 50.00th=[12780], 60.00th=[13042], 00:18:40.809 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13566], 95.00th=[13698], 00:18:40.809 | 99.00th=[14091], 99.50th=[14222], 99.90th=[14484], 99.95th=[14484], 00:18:40.809 | 99.99th=[14615] 00:18:40.809 bw ( KiB/s): min=29184, max=30720, per=33.34%, avg=29669.05, stdev=525.30, samples=19 00:18:40.809 iops : min= 228, max= 240, avg=231.79, stdev= 4.10, samples=19 00:18:40.809 lat (msec) : 20=100.00% 00:18:40.809 cpu : usr=91.29%, sys=8.18%, ctx=11, majf=0, minf=9 00:18:40.809 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:40.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.809 issued rwts: total=2319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.809 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:40.809 filename0: (groupid=0, jobs=1): err= 0: pid=75313: Wed Nov 6 15:17:07 2024 00:18:40.809 read: IOPS=231, BW=29.0MiB/s (30.4MB/s)(290MiB/10004msec) 00:18:40.809 slat (nsec): min=7071, max=60561, avg=16348.40, stdev=5483.93 00:18:40.809 clat (usec): min=11883, max=14545, avg=12905.35, stdev=491.72 00:18:40.809 lat (usec): min=11896, max=14592, avg=12921.70, stdev=492.16 00:18:40.809 clat percentiles (usec): 00:18:40.809 | 1.00th=[11994], 5.00th=[12125], 10.00th=[12387], 20.00th=[12518], 00:18:40.809 | 30.00th=[12649], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:18:40.809 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13566], 95.00th=[13698], 00:18:40.809 | 99.00th=[14091], 99.50th=[14222], 99.90th=[14484], 99.95th=[14484], 00:18:40.809 | 99.99th=[14484] 00:18:40.809 bw ( KiB/s): min=29184, max=30720, per=33.35%, avg=29672.21, stdev=527.27, samples=19 00:18:40.809 iops : min= 228, max= 240, avg=231.79, stdev= 4.10, samples=19 00:18:40.809 lat (msec) : 20=100.00% 00:18:40.809 cpu : usr=92.46%, sys=6.98%, ctx=6, majf=0, minf=0 00:18:40.809 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:40.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.809 issued rwts: total=2319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.809 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:40.809 filename0: (groupid=0, jobs=1): err= 0: pid=75314: Wed Nov 6 15:17:07 2024 00:18:40.809 read: IOPS=231, BW=29.0MiB/s (30.4MB/s)(290MiB/10008msec) 00:18:40.809 slat (nsec): min=6997, max=61727, avg=15338.44, stdev=5788.18 00:18:40.809 clat (usec): min=11879, max=15861, avg=12912.67, stdev=503.08 00:18:40.809 lat (usec): min=11892, max=15890, avg=12928.01, stdev=503.54 00:18:40.809 clat percentiles (usec): 00:18:40.809 | 1.00th=[11994], 5.00th=[12125], 10.00th=[12387], 
20.00th=[12518], 00:18:40.809 | 30.00th=[12649], 40.00th=[12649], 50.00th=[12780], 60.00th=[13042], 00:18:40.809 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13698], 95.00th=[13698], 00:18:40.809 | 99.00th=[14091], 99.50th=[14222], 99.90th=[15795], 99.95th=[15795], 00:18:40.809 | 99.99th=[15926] 00:18:40.809 bw ( KiB/s): min=29184, max=31488, per=33.34%, avg=29669.05, stdev=584.36, samples=19 00:18:40.809 iops : min= 228, max= 246, avg=231.79, stdev= 4.57, samples=19 00:18:40.809 lat (msec) : 20=100.00% 00:18:40.809 cpu : usr=92.22%, sys=7.21%, ctx=11, majf=0, minf=9 00:18:40.809 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:40.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.809 issued rwts: total=2319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.809 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:40.809 00:18:40.809 Run status group 0 (all jobs): 00:18:40.809 READ: bw=86.9MiB/s (91.1MB/s), 29.0MiB/s-29.0MiB/s (30.4MB/s-30.4MB/s), io=870MiB (912MB), run=10004-10008msec 00:18:40.809 15:17:08 -- target/dif.sh@132 -- # destroy_subsystems 0 00:18:40.809 15:17:08 -- target/dif.sh@43 -- # local sub 00:18:40.809 15:17:08 -- target/dif.sh@45 -- # for sub in "$@" 00:18:40.809 15:17:08 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:40.809 15:17:08 -- target/dif.sh@36 -- # local sub_id=0 00:18:40.809 15:17:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:40.809 15:17:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.809 15:17:08 -- common/autotest_common.sh@10 -- # set +x 00:18:40.809 15:17:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.809 15:17:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:40.809 15:17:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.809 15:17:08 -- common/autotest_common.sh@10 -- # set +x 00:18:40.809 ************************************ 00:18:40.809 END TEST fio_dif_digest 00:18:40.809 ************************************ 00:18:40.809 15:17:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.809 00:18:40.809 real 0m10.935s 00:18:40.809 user 0m28.221s 00:18:40.809 sys 0m2.469s 00:18:40.809 15:17:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:40.809 15:17:08 -- common/autotest_common.sh@10 -- # set +x 00:18:40.809 15:17:08 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:40.809 15:17:08 -- target/dif.sh@147 -- # nvmftestfini 00:18:40.809 15:17:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:40.809 15:17:08 -- nvmf/common.sh@116 -- # sync 00:18:40.809 15:17:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:40.809 15:17:08 -- nvmf/common.sh@119 -- # set +e 00:18:40.809 15:17:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:40.809 15:17:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:40.809 rmmod nvme_tcp 00:18:40.809 rmmod nvme_fabrics 00:18:40.809 rmmod nvme_keyring 00:18:40.809 15:17:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:40.809 15:17:08 -- nvmf/common.sh@123 -- # set -e 00:18:40.809 15:17:08 -- nvmf/common.sh@124 -- # return 0 00:18:40.809 15:17:08 -- nvmf/common.sh@477 -- # '[' -n 74551 ']' 00:18:40.809 15:17:08 -- nvmf/common.sh@478 -- # killprocess 74551 00:18:40.809 15:17:08 -- common/autotest_common.sh@936 -- # '[' -z 74551 ']' 00:18:40.809 15:17:08 -- common/autotest_common.sh@940 -- # kill -0 74551 
00:18:40.809 15:17:08 -- common/autotest_common.sh@941 -- # uname 00:18:40.809 15:17:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:40.809 15:17:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74551 00:18:40.809 killing process with pid 74551 00:18:40.809 15:17:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:40.809 15:17:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:40.809 15:17:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74551' 00:18:40.809 15:17:08 -- common/autotest_common.sh@955 -- # kill 74551 00:18:40.809 15:17:08 -- common/autotest_common.sh@960 -- # wait 74551 00:18:40.809 15:17:08 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:18:40.809 15:17:08 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:40.809 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:40.809 Waiting for block devices as requested 00:18:40.809 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:18:40.809 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:18:40.809 15:17:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:40.809 15:17:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:40.809 15:17:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:40.809 15:17:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:40.809 15:17:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.809 15:17:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:40.809 15:17:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.809 15:17:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:40.809 00:18:40.809 real 0m59.279s 00:18:40.809 user 3m47.699s 00:18:40.809 sys 0m18.922s 00:18:40.809 15:17:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:40.809 15:17:09 -- common/autotest_common.sh@10 -- # set +x 00:18:40.809 ************************************ 00:18:40.809 END TEST nvmf_dif 00:18:40.809 ************************************ 00:18:40.809 15:17:09 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:18:40.809 15:17:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:40.809 15:17:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:40.809 15:17:09 -- common/autotest_common.sh@10 -- # set +x 00:18:40.809 ************************************ 00:18:40.809 START TEST nvmf_abort_qd_sizes 00:18:40.810 ************************************ 00:18:40.810 15:17:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:18:40.810 * Looking for test storage... 
00:18:40.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:40.810 15:17:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:40.810 15:17:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:40.810 15:17:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:40.810 15:17:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:40.810 15:17:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:40.810 15:17:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:40.810 15:17:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:40.810 15:17:09 -- scripts/common.sh@335 -- # IFS=.-: 00:18:40.810 15:17:09 -- scripts/common.sh@335 -- # read -ra ver1 00:18:40.810 15:17:09 -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.810 15:17:09 -- scripts/common.sh@336 -- # read -ra ver2 00:18:40.810 15:17:09 -- scripts/common.sh@337 -- # local 'op=<' 00:18:40.810 15:17:09 -- scripts/common.sh@339 -- # ver1_l=2 00:18:40.810 15:17:09 -- scripts/common.sh@340 -- # ver2_l=1 00:18:40.810 15:17:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:40.810 15:17:09 -- scripts/common.sh@343 -- # case "$op" in 00:18:40.810 15:17:09 -- scripts/common.sh@344 -- # : 1 00:18:40.810 15:17:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:40.810 15:17:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:40.810 15:17:09 -- scripts/common.sh@364 -- # decimal 1 00:18:40.810 15:17:09 -- scripts/common.sh@352 -- # local d=1 00:18:40.810 15:17:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.810 15:17:09 -- scripts/common.sh@354 -- # echo 1 00:18:40.810 15:17:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:40.810 15:17:09 -- scripts/common.sh@365 -- # decimal 2 00:18:40.810 15:17:09 -- scripts/common.sh@352 -- # local d=2 00:18:40.810 15:17:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.810 15:17:09 -- scripts/common.sh@354 -- # echo 2 00:18:40.810 15:17:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:40.810 15:17:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:40.810 15:17:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:40.810 15:17:09 -- scripts/common.sh@367 -- # return 0 00:18:40.810 15:17:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.810 15:17:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:40.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.810 --rc genhtml_branch_coverage=1 00:18:40.810 --rc genhtml_function_coverage=1 00:18:40.810 --rc genhtml_legend=1 00:18:40.810 --rc geninfo_all_blocks=1 00:18:40.810 --rc geninfo_unexecuted_blocks=1 00:18:40.810 00:18:40.810 ' 00:18:40.810 15:17:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:40.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.810 --rc genhtml_branch_coverage=1 00:18:40.810 --rc genhtml_function_coverage=1 00:18:40.810 --rc genhtml_legend=1 00:18:40.810 --rc geninfo_all_blocks=1 00:18:40.810 --rc geninfo_unexecuted_blocks=1 00:18:40.810 00:18:40.810 ' 00:18:40.810 15:17:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:40.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.810 --rc genhtml_branch_coverage=1 00:18:40.810 --rc genhtml_function_coverage=1 00:18:40.810 --rc genhtml_legend=1 00:18:40.810 --rc geninfo_all_blocks=1 00:18:40.810 --rc geninfo_unexecuted_blocks=1 00:18:40.810 00:18:40.810 ' 00:18:40.810 
15:17:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:40.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.810 --rc genhtml_branch_coverage=1 00:18:40.810 --rc genhtml_function_coverage=1 00:18:40.810 --rc genhtml_legend=1 00:18:40.810 --rc geninfo_all_blocks=1 00:18:40.810 --rc geninfo_unexecuted_blocks=1 00:18:40.810 00:18:40.810 ' 00:18:40.810 15:17:09 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:40.810 15:17:09 -- nvmf/common.sh@7 -- # uname -s 00:18:40.810 15:17:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.810 15:17:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.810 15:17:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.810 15:17:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.810 15:17:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.810 15:17:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.810 15:17:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.810 15:17:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.810 15:17:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.810 15:17:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.810 15:17:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 00:18:40.810 15:17:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=819f6113-9743-44c3-be27-f14abf178c18 00:18:40.810 15:17:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.810 15:17:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.810 15:17:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:40.810 15:17:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:40.810 15:17:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.810 15:17:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.810 15:17:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.810 15:17:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.810 15:17:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.810 15:17:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.810 15:17:09 -- paths/export.sh@5 -- # export PATH 00:18:40.810 15:17:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.810 15:17:09 -- nvmf/common.sh@46 -- # : 0 00:18:40.810 15:17:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:40.810 15:17:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:40.810 15:17:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:40.810 15:17:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.810 15:17:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.810 15:17:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:40.810 15:17:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:40.810 15:17:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:40.810 15:17:09 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:18:40.810 15:17:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:40.810 15:17:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.810 15:17:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:40.810 15:17:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:40.810 15:17:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:40.810 15:17:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.810 15:17:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:40.810 15:17:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.810 15:17:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:40.810 15:17:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:40.810 15:17:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:40.810 15:17:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:40.810 15:17:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:40.810 15:17:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:40.810 15:17:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.810 15:17:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.810 15:17:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:40.810 15:17:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:40.810 15:17:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:40.810 15:17:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:40.810 15:17:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:40.810 15:17:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.810 15:17:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:40.810 15:17:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:40.810 15:17:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:40.810 15:17:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:40.810 15:17:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:40.810 15:17:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:40.810 Cannot find device "nvmf_tgt_br" 00:18:40.810 15:17:09 -- nvmf/common.sh@154 -- # true 00:18:40.810 15:17:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:40.810 Cannot find device "nvmf_tgt_br2" 00:18:40.810 15:17:09 -- nvmf/common.sh@155 -- # true 
00:18:40.810 15:17:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:40.810 15:17:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:40.810 Cannot find device "nvmf_tgt_br" 00:18:40.810 15:17:09 -- nvmf/common.sh@157 -- # true 00:18:40.810 15:17:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:40.810 Cannot find device "nvmf_tgt_br2" 00:18:40.811 15:17:09 -- nvmf/common.sh@158 -- # true 00:18:40.811 15:17:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:40.811 15:17:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:40.811 15:17:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:40.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.811 15:17:09 -- nvmf/common.sh@161 -- # true 00:18:40.811 15:17:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:40.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.811 15:17:09 -- nvmf/common.sh@162 -- # true 00:18:40.811 15:17:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:40.811 15:17:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:40.811 15:17:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:40.811 15:17:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:40.811 15:17:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:40.811 15:17:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:40.811 15:17:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:40.811 15:17:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:40.811 15:17:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:40.811 15:17:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:40.811 15:17:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:40.811 15:17:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:40.811 15:17:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:40.811 15:17:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:40.811 15:17:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:40.811 15:17:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:40.811 15:17:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:40.811 15:17:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:40.811 15:17:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:40.811 15:17:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:40.811 15:17:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:40.811 15:17:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:40.811 15:17:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:40.811 15:17:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:40.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:40.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:18:40.811 00:18:40.811 --- 10.0.0.2 ping statistics --- 00:18:40.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.811 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:40.811 15:17:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:40.811 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:40.811 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:18:40.811 00:18:40.811 --- 10.0.0.3 ping statistics --- 00:18:40.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.811 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:40.811 15:17:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:40.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:40.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:40.811 00:18:40.811 --- 10.0.0.1 ping statistics --- 00:18:40.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.811 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:40.811 15:17:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.811 15:17:09 -- nvmf/common.sh@421 -- # return 0 00:18:40.811 15:17:09 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:18:40.811 15:17:09 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:41.070 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:41.329 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:18:41.329 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:18:41.329 15:17:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.329 15:17:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:41.329 15:17:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:41.329 15:17:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.329 15:17:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:41.329 15:17:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:41.329 15:17:10 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:18:41.329 15:17:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:41.329 15:17:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:41.329 15:17:10 -- common/autotest_common.sh@10 -- # set +x 00:18:41.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.329 15:17:10 -- nvmf/common.sh@469 -- # nvmfpid=75910 00:18:41.329 15:17:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:18:41.329 15:17:10 -- nvmf/common.sh@470 -- # waitforlisten 75910 00:18:41.329 15:17:10 -- common/autotest_common.sh@829 -- # '[' -z 75910 ']' 00:18:41.329 15:17:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.329 15:17:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:41.329 15:17:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.329 15:17:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:41.329 15:17:10 -- common/autotest_common.sh@10 -- # set +x 00:18:41.329 [2024-11-06 15:17:10.573285] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
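All of the plumbing traced above (nvmf_veth_init) reduces to a small veth-and-bridge topology: the initiator keeps 10.0.0.1 on the host side, the target runs inside the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, and the two sides are stitched together through the nvmf_br bridge before the three ping checks. A condensed sketch of the same setup, using only commands and names from the trace (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is configured the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2    # host -> target, as in the check above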
00:18:41.329 [2024-11-06 15:17:10.573578] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.587 [2024-11-06 15:17:10.719097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:41.587 [2024-11-06 15:17:10.790405] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:41.587 [2024-11-06 15:17:10.790881] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.587 [2024-11-06 15:17:10.791037] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.587 [2024-11-06 15:17:10.791275] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:41.587 [2024-11-06 15:17:10.791597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.587 [2024-11-06 15:17:10.791752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.587 [2024-11-06 15:17:10.791875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:41.587 [2024-11-06 15:17:10.791883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.522 15:17:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:42.522 15:17:11 -- common/autotest_common.sh@862 -- # return 0 00:18:42.522 15:17:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:42.522 15:17:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:42.522 15:17:11 -- common/autotest_common.sh@10 -- # set +x 00:18:42.522 15:17:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.522 15:17:11 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:18:42.522 15:17:11 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:18:42.522 15:17:11 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:18:42.522 15:17:11 -- scripts/common.sh@311 -- # local bdf bdfs 00:18:42.522 15:17:11 -- scripts/common.sh@312 -- # local nvmes 00:18:42.522 15:17:11 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:18:42.522 15:17:11 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:18:42.522 15:17:11 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:18:42.522 15:17:11 -- scripts/common.sh@297 -- # local bdf= 00:18:42.522 15:17:11 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:18:42.522 15:17:11 -- scripts/common.sh@232 -- # local class 00:18:42.522 15:17:11 -- scripts/common.sh@233 -- # local subclass 00:18:42.522 15:17:11 -- scripts/common.sh@234 -- # local progif 00:18:42.522 15:17:11 -- scripts/common.sh@235 -- # printf %02x 1 00:18:42.522 15:17:11 -- scripts/common.sh@235 -- # class=01 00:18:42.522 15:17:11 -- scripts/common.sh@236 -- # printf %02x 8 00:18:42.522 15:17:11 -- scripts/common.sh@236 -- # subclass=08 00:18:42.522 15:17:11 -- scripts/common.sh@237 -- # printf %02x 2 00:18:42.522 15:17:11 -- scripts/common.sh@237 -- # progif=02 00:18:42.522 15:17:11 -- scripts/common.sh@239 -- # hash lspci 00:18:42.522 15:17:11 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:18:42.522 15:17:11 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:18:42.522 15:17:11 -- scripts/common.sh@242 -- # grep -i -- -p02 00:18:42.522 15:17:11 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:18:42.522 15:17:11 -- scripts/common.sh@244 -- # tr -d '"' 00:18:42.522 15:17:11 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:18:42.522 15:17:11 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:18:42.522 15:17:11 -- scripts/common.sh@15 -- # local i 00:18:42.522 15:17:11 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:18:42.522 15:17:11 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:18:42.522 15:17:11 -- scripts/common.sh@24 -- # return 0 00:18:42.522 15:17:11 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:18:42.522 15:17:11 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:18:42.522 15:17:11 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:18:42.522 15:17:11 -- scripts/common.sh@15 -- # local i 00:18:42.522 15:17:11 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:18:42.522 15:17:11 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:18:42.522 15:17:11 -- scripts/common.sh@24 -- # return 0 00:18:42.522 15:17:11 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:18:42.522 15:17:11 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:18:42.522 15:17:11 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:18:42.522 15:17:11 -- scripts/common.sh@322 -- # uname -s 00:18:42.523 15:17:11 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:18:42.523 15:17:11 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:18:42.523 15:17:11 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:18:42.523 15:17:11 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:18:42.523 15:17:11 -- scripts/common.sh@322 -- # uname -s 00:18:42.523 15:17:11 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:18:42.523 15:17:11 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:18:42.523 15:17:11 -- scripts/common.sh@327 -- # (( 2 )) 00:18:42.523 15:17:11 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:18:42.523 15:17:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:42.523 15:17:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:42.523 15:17:11 -- common/autotest_common.sh@10 -- # set +x 00:18:42.523 ************************************ 00:18:42.523 START TEST spdk_target_abort 00:18:42.523 ************************************ 00:18:42.523 15:17:11 -- common/autotest_common.sh@1114 -- # spdk_target 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:18:42.523 15:17:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.523 15:17:11 -- common/autotest_common.sh@10 -- # set +x 00:18:42.523 spdk_targetn1 00:18:42.523 15:17:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:42.523 15:17:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.523 15:17:11 -- common/autotest_common.sh@10 -- # set +x 00:18:42.523 [2024-11-06 
15:17:11.741655] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.523 15:17:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:18:42.523 15:17:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.523 15:17:11 -- common/autotest_common.sh@10 -- # set +x 00:18:42.523 15:17:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:18:42.523 15:17:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.523 15:17:11 -- common/autotest_common.sh@10 -- # set +x 00:18:42.523 15:17:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:18:42.523 15:17:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.523 15:17:11 -- common/autotest_common.sh@10 -- # set +x 00:18:42.523 [2024-11-06 15:17:11.769869] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.523 15:17:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@24 -- # local target r 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:18:42.523 15:17:11 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:18:45.806 Initializing NVMe Controllers 00:18:45.806 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:18:45.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:18:45.806 Initialization complete. Launching workers. 00:18:45.806 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10321, failed: 0 00:18:45.806 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1050, failed to submit 9271 00:18:45.806 success 841, unsuccess 209, failed 0 00:18:45.806 15:17:15 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:18:45.806 15:17:15 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:18:49.093 Initializing NVMe Controllers 00:18:49.093 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:18:49.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:18:49.093 Initialization complete. Launching workers. 00:18:49.093 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 9033, failed: 0 00:18:49.093 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1193, failed to submit 7840 00:18:49.093 success 393, unsuccess 800, failed 0 00:18:49.093 15:17:18 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:18:49.094 15:17:18 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:18:52.375 Initializing NVMe Controllers 00:18:52.375 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:18:52.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:18:52.375 Initialization complete. Launching workers. 
00:18:52.375 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31761, failed: 0 00:18:52.375 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2300, failed to submit 29461 00:18:52.375 success 450, unsuccess 1850, failed 0 00:18:52.375 15:17:21 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:18:52.375 15:17:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.375 15:17:21 -- common/autotest_common.sh@10 -- # set +x 00:18:52.375 15:17:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.375 15:17:21 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:18:52.375 15:17:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.375 15:17:21 -- common/autotest_common.sh@10 -- # set +x 00:18:52.634 15:17:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.634 15:17:21 -- target/abort_qd_sizes.sh@62 -- # killprocess 75910 00:18:52.634 15:17:21 -- common/autotest_common.sh@936 -- # '[' -z 75910 ']' 00:18:52.634 15:17:21 -- common/autotest_common.sh@940 -- # kill -0 75910 00:18:52.634 15:17:21 -- common/autotest_common.sh@941 -- # uname 00:18:52.634 15:17:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:52.634 15:17:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75910 00:18:52.634 killing process with pid 75910 00:18:52.634 15:17:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:52.634 15:17:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:52.634 15:17:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75910' 00:18:52.634 15:17:21 -- common/autotest_common.sh@955 -- # kill 75910 00:18:52.634 15:17:21 -- common/autotest_common.sh@960 -- # wait 75910 00:18:52.892 00:18:52.892 real 0m10.395s 00:18:52.892 user 0m42.527s 00:18:52.892 sys 0m1.994s 00:18:52.892 15:17:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:52.892 15:17:22 -- common/autotest_common.sh@10 -- # set +x 00:18:52.892 ************************************ 00:18:52.892 END TEST spdk_target_abort 00:18:52.892 ************************************ 00:18:52.892 15:17:22 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:18:52.892 15:17:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:52.892 15:17:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:52.892 15:17:22 -- common/autotest_common.sh@10 -- # set +x 00:18:52.892 ************************************ 00:18:52.892 START TEST kernel_target_abort 00:18:52.892 ************************************ 00:18:52.892 15:17:22 -- common/autotest_common.sh@1114 -- # kernel_target 00:18:52.892 15:17:22 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:18:52.892 15:17:22 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:18:52.892 15:17:22 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:18:52.892 15:17:22 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:18:52.892 15:17:22 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:18:52.892 15:17:22 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:18:52.892 15:17:22 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:52.892 15:17:22 -- nvmf/common.sh@627 -- # local block nvme 00:18:52.892 15:17:22 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:18:52.892 15:17:22 -- nvmf/common.sh@630 -- # modprobe nvmet 00:18:52.892 15:17:22 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:52.892 15:17:22 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:53.460 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:53.460 Waiting for block devices as requested 00:18:53.460 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:18:53.460 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:18:53.460 15:17:22 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:18:53.460 15:17:22 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:53.460 15:17:22 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:18:53.460 15:17:22 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:18:53.460 15:17:22 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:53.460 No valid GPT data, bailing 00:18:53.460 15:17:22 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:53.718 15:17:22 -- scripts/common.sh@393 -- # pt= 00:18:53.718 15:17:22 -- scripts/common.sh@394 -- # return 1 00:18:53.719 15:17:22 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:18:53.719 15:17:22 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:18:53.719 15:17:22 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:53.719 15:17:22 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:18:53.719 15:17:22 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:18:53.719 15:17:22 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:53.719 No valid GPT data, bailing 00:18:53.719 15:17:22 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:53.719 15:17:22 -- scripts/common.sh@393 -- # pt= 00:18:53.719 15:17:22 -- scripts/common.sh@394 -- # return 1 00:18:53.719 15:17:22 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:18:53.719 15:17:22 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:18:53.719 15:17:22 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:18:53.719 15:17:22 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:18:53.719 15:17:22 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:18:53.719 15:17:22 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:18:53.719 No valid GPT data, bailing 00:18:53.719 15:17:22 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:18:53.719 15:17:22 -- scripts/common.sh@393 -- # pt= 00:18:53.719 15:17:22 -- scripts/common.sh@394 -- # return 1 00:18:53.719 15:17:22 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:18:53.719 15:17:22 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:18:53.719 15:17:22 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:18:53.719 15:17:22 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:18:53.719 15:17:22 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:18:53.719 15:17:22 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:18:53.719 No valid GPT data, bailing 00:18:53.719 15:17:22 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:18:53.719 15:17:22 -- scripts/common.sh@393 -- # pt= 00:18:53.719 15:17:22 -- scripts/common.sh@394 -- # return 1 00:18:53.719 15:17:22 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:18:53.719 15:17:22 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:18:53.719 15:17:22 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:18:53.719 15:17:22 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:18:53.719 15:17:22 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:53.719 15:17:22 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:18:53.719 15:17:22 -- nvmf/common.sh@654 -- # echo 1 00:18:53.719 15:17:22 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:18:53.719 15:17:22 -- nvmf/common.sh@656 -- # echo 1 00:18:53.719 15:17:22 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:18:53.719 15:17:22 -- nvmf/common.sh@663 -- # echo tcp 00:18:53.719 15:17:22 -- nvmf/common.sh@664 -- # echo 4420 00:18:53.719 15:17:22 -- nvmf/common.sh@665 -- # echo ipv4 00:18:53.719 15:17:22 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:53.978 15:17:22 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:819f6113-9743-44c3-be27-f14abf178c18 --hostid=819f6113-9743-44c3-be27-f14abf178c18 -a 10.0.0.1 -t tcp -s 4420 00:18:53.978 00:18:53.978 Discovery Log Number of Records 2, Generation counter 2 00:18:53.978 =====Discovery Log Entry 0====== 00:18:53.978 trtype: tcp 00:18:53.978 adrfam: ipv4 00:18:53.978 subtype: current discovery subsystem 00:18:53.978 treq: not specified, sq flow control disable supported 00:18:53.978 portid: 1 00:18:53.978 trsvcid: 4420 00:18:53.978 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:53.978 traddr: 10.0.0.1 00:18:53.978 eflags: none 00:18:53.978 sectype: none 00:18:53.978 =====Discovery Log Entry 1====== 00:18:53.978 trtype: tcp 00:18:53.978 adrfam: ipv4 00:18:53.978 subtype: nvme subsystem 00:18:53.978 treq: not specified, sq flow control disable supported 00:18:53.978 portid: 1 00:18:53.978 trsvcid: 4420 00:18:53.978 subnqn: kernel_target 00:18:53.978 traddr: 10.0.0.1 00:18:53.978 eflags: none 00:18:53.978 sectype: none 00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@24 -- # local target r 00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:18:53.978 15:17:23 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:18:57.264 Initializing NVMe Controllers 00:18:57.264 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:18:57.264 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:18:57.264 Initialization complete. Launching workers. 00:18:57.264 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 30775, failed: 0 00:18:57.264 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30775, failed to submit 0 00:18:57.264 success 0, unsuccess 30775, failed 0 00:18:57.264 15:17:26 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:18:57.264 15:17:26 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:00.551 Initializing NVMe Controllers 00:19:00.551 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:19:00.551 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:19:00.551 Initialization complete. Launching workers. 00:19:00.551 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 64458, failed: 0 00:19:00.551 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27066, failed to submit 37392 00:19:00.551 success 0, unsuccess 27066, failed 0 00:19:00.551 15:17:29 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:00.551 15:17:29 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:03.852 Initializing NVMe Controllers 00:19:03.852 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:19:03.852 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:19:03.852 Initialization complete. Launching workers. 
00:19:03.852 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 73232, failed: 0 00:19:03.852 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18266, failed to submit 54966 00:19:03.852 success 0, unsuccess 18266, failed 0 00:19:03.852 15:17:32 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:19:03.852 15:17:32 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:19:03.852 15:17:32 -- nvmf/common.sh@677 -- # echo 0 00:19:03.852 15:17:32 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:19:03.852 15:17:32 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:03.852 15:17:32 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:03.852 15:17:32 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:19:03.852 15:17:32 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:19:03.852 15:17:32 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:19:03.852 ************************************ 00:19:03.852 END TEST kernel_target_abort 00:19:03.852 ************************************ 00:19:03.852 00:19:03.852 real 0m10.524s 00:19:03.852 user 0m5.608s 00:19:03.852 sys 0m2.352s 00:19:03.852 15:17:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:03.852 15:17:32 -- common/autotest_common.sh@10 -- # set +x 00:19:03.852 15:17:32 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:19:03.852 15:17:32 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:19:03.852 15:17:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:03.852 15:17:32 -- nvmf/common.sh@116 -- # sync 00:19:03.852 15:17:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:03.852 15:17:32 -- nvmf/common.sh@119 -- # set +e 00:19:03.852 15:17:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:03.852 15:17:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:03.852 rmmod nvme_tcp 00:19:03.852 rmmod nvme_fabrics 00:19:03.852 rmmod nvme_keyring 00:19:03.852 15:17:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:03.852 15:17:32 -- nvmf/common.sh@123 -- # set -e 00:19:03.852 15:17:32 -- nvmf/common.sh@124 -- # return 0 00:19:03.852 Process with pid 75910 is not found 00:19:03.852 15:17:32 -- nvmf/common.sh@477 -- # '[' -n 75910 ']' 00:19:03.852 15:17:32 -- nvmf/common.sh@478 -- # killprocess 75910 00:19:03.852 15:17:32 -- common/autotest_common.sh@936 -- # '[' -z 75910 ']' 00:19:03.852 15:17:32 -- common/autotest_common.sh@940 -- # kill -0 75910 00:19:03.852 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (75910) - No such process 00:19:03.852 15:17:32 -- common/autotest_common.sh@963 -- # echo 'Process with pid 75910 is not found' 00:19:03.852 15:17:32 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:19:03.852 15:17:32 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:04.127 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:04.386 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:19:04.386 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:19:04.386 15:17:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:04.386 15:17:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:04.386 15:17:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:04.386 15:17:33 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:19:04.386 15:17:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.386 15:17:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:04.386 15:17:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.386 15:17:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:04.386 00:19:04.386 real 0m24.417s 00:19:04.386 user 0m49.576s 00:19:04.386 sys 0m5.623s 00:19:04.386 ************************************ 00:19:04.386 END TEST nvmf_abort_qd_sizes 00:19:04.386 ************************************ 00:19:04.386 15:17:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:04.386 15:17:33 -- common/autotest_common.sh@10 -- # set +x 00:19:04.386 15:17:33 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:19:04.386 15:17:33 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:19:04.386 15:17:33 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:19:04.386 15:17:33 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:04.386 15:17:33 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:19:04.386 15:17:33 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:19:04.386 15:17:33 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:19:04.386 15:17:33 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:04.386 15:17:33 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:19:04.386 15:17:33 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:04.386 15:17:33 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:04.386 15:17:33 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:19:04.386 15:17:33 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:19:04.386 15:17:33 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:19:04.386 15:17:33 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:19:04.386 15:17:33 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:19:04.386 15:17:33 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:19:04.386 15:17:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:04.386 15:17:33 -- common/autotest_common.sh@10 -- # set +x 00:19:04.386 15:17:33 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:19:04.386 15:17:33 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:19:04.386 15:17:33 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:19:04.386 15:17:33 -- common/autotest_common.sh@10 -- # set +x 00:19:06.290 INFO: APP EXITING 00:19:06.290 INFO: killing all VMs 00:19:06.290 INFO: killing vhost app 00:19:06.290 INFO: EXIT DONE 00:19:06.858 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:06.858 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:19:06.858 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:19:07.425 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:07.425 Cleaning 00:19:07.425 Removing: /var/run/dpdk/spdk0/config 00:19:07.425 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:07.425 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:07.425 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:07.425 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:07.425 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:07.425 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:07.425 Removing: /var/run/dpdk/spdk1/config 00:19:07.425 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:19:07.425 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:19:07.425 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:19:07.425 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:19:07.684 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:19:07.684 Removing: /var/run/dpdk/spdk1/hugepage_info 00:19:07.684 Removing: /var/run/dpdk/spdk2/config 00:19:07.684 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:19:07.685 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:19:07.685 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:19:07.685 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:19:07.685 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:19:07.685 Removing: /var/run/dpdk/spdk2/hugepage_info 00:19:07.685 Removing: /var/run/dpdk/spdk3/config 00:19:07.685 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:19:07.685 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:19:07.685 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:19:07.685 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:19:07.685 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:19:07.685 Removing: /var/run/dpdk/spdk3/hugepage_info 00:19:07.685 Removing: /var/run/dpdk/spdk4/config 00:19:07.685 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:19:07.685 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:19:07.685 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:19:07.685 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:19:07.685 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:19:07.685 Removing: /var/run/dpdk/spdk4/hugepage_info 00:19:07.685 Removing: /dev/shm/nvmf_trace.0 00:19:07.685 Removing: /dev/shm/spdk_tgt_trace.pid53832 00:19:07.685 Removing: /var/run/dpdk/spdk0 00:19:07.685 Removing: /var/run/dpdk/spdk1 00:19:07.685 Removing: /var/run/dpdk/spdk2 00:19:07.685 Removing: /var/run/dpdk/spdk3 00:19:07.685 Removing: /var/run/dpdk/spdk4 00:19:07.685 Removing: /var/run/dpdk/spdk_pid53680 00:19:07.685 Removing: /var/run/dpdk/spdk_pid53832 00:19:07.685 Removing: /var/run/dpdk/spdk_pid54085 00:19:07.685 Removing: /var/run/dpdk/spdk_pid54270 00:19:07.685 Removing: /var/run/dpdk/spdk_pid54423 00:19:07.685 Removing: /var/run/dpdk/spdk_pid54489 00:19:07.685 Removing: /var/run/dpdk/spdk_pid54572 00:19:07.685 Removing: /var/run/dpdk/spdk_pid54670 00:19:07.685 Removing: /var/run/dpdk/spdk_pid54754 00:19:07.685 Removing: /var/run/dpdk/spdk_pid54787 00:19:07.685 Removing: /var/run/dpdk/spdk_pid54817 00:19:07.685 Removing: /var/run/dpdk/spdk_pid54891 00:19:07.685 Removing: /var/run/dpdk/spdk_pid54972 00:19:07.685 Removing: /var/run/dpdk/spdk_pid55412 00:19:07.685 Removing: /var/run/dpdk/spdk_pid55464 00:19:07.685 Removing: /var/run/dpdk/spdk_pid55509 00:19:07.685 Removing: /var/run/dpdk/spdk_pid55525 00:19:07.685 Removing: /var/run/dpdk/spdk_pid55587 00:19:07.685 Removing: /var/run/dpdk/spdk_pid55603 00:19:07.685 Removing: /var/run/dpdk/spdk_pid55670 00:19:07.685 Removing: /var/run/dpdk/spdk_pid55686 00:19:07.685 Removing: /var/run/dpdk/spdk_pid55726 00:19:07.685 Removing: /var/run/dpdk/spdk_pid55744 00:19:07.685 Removing: /var/run/dpdk/spdk_pid55784 00:19:07.685 Removing: /var/run/dpdk/spdk_pid55802 00:19:07.685 Removing: /var/run/dpdk/spdk_pid55937 00:19:07.685 Removing: /var/run/dpdk/spdk_pid55967 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56054 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56100 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56129 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56183 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56203 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56237 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56251 
00:19:07.685 Removing: /var/run/dpdk/spdk_pid56286 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56305 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56334 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56354 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56388 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56408 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56437 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56456 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56491 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56505 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56539 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56559 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56593 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56613 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56642 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56656 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56696 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56710 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56739 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56764 00:19:07.685 Removing: /var/run/dpdk/spdk_pid56793 00:19:07.944 Removing: /var/run/dpdk/spdk_pid56807 00:19:07.944 Removing: /var/run/dpdk/spdk_pid56847 00:19:07.944 Removing: /var/run/dpdk/spdk_pid56861 00:19:07.944 Removing: /var/run/dpdk/spdk_pid56896 00:19:07.944 Removing: /var/run/dpdk/spdk_pid56915 00:19:07.944 Removing: /var/run/dpdk/spdk_pid56944 00:19:07.944 Removing: /var/run/dpdk/spdk_pid56964 00:19:07.944 Removing: /var/run/dpdk/spdk_pid56998 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57015 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57053 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57075 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57115 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57129 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57163 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57183 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57213 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57290 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57377 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57711 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57723 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57754 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57772 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57780 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57798 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57816 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57824 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57842 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57860 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57868 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57886 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57906 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57922 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57939 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57951 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57969 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57983 00:19:07.944 Removing: /var/run/dpdk/spdk_pid57995 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58013 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58038 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58056 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58084 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58148 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58175 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58184 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58213 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58222 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58230 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58270 00:19:07.944 Removing: 
/var/run/dpdk/spdk_pid58282 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58308 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58316 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58323 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58330 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58333 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58340 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58348 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58355 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58382 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58408 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58418 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58445 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58456 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58458 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58504 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58510 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58542 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58544 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58551 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58559 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58561 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58574 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58576 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58589 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58659 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58701 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58807 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58844 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58888 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58897 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58917 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58932 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58961 00:19:07.944 Removing: /var/run/dpdk/spdk_pid58981 00:19:07.944 Removing: /var/run/dpdk/spdk_pid59056 00:19:07.944 Removing: /var/run/dpdk/spdk_pid59066 00:19:07.944 Removing: /var/run/dpdk/spdk_pid59107 00:19:07.944 Removing: /var/run/dpdk/spdk_pid59186 00:19:07.944 Removing: /var/run/dpdk/spdk_pid59231 00:19:08.203 Removing: /var/run/dpdk/spdk_pid59265 00:19:08.203 Removing: /var/run/dpdk/spdk_pid59358 00:19:08.203 Removing: /var/run/dpdk/spdk_pid59404 00:19:08.203 Removing: /var/run/dpdk/spdk_pid59430 00:19:08.203 Removing: /var/run/dpdk/spdk_pid59659 00:19:08.203 Removing: /var/run/dpdk/spdk_pid59751 00:19:08.203 Removing: /var/run/dpdk/spdk_pid59781 00:19:08.203 Removing: /var/run/dpdk/spdk_pid60111 00:19:08.203 Removing: /var/run/dpdk/spdk_pid60150 00:19:08.203 Removing: /var/run/dpdk/spdk_pid60458 00:19:08.203 Removing: /var/run/dpdk/spdk_pid60881 00:19:08.203 Removing: /var/run/dpdk/spdk_pid61150 00:19:08.203 Removing: /var/run/dpdk/spdk_pid61900 00:19:08.203 Removing: /var/run/dpdk/spdk_pid62731 00:19:08.203 Removing: /var/run/dpdk/spdk_pid62854 00:19:08.203 Removing: /var/run/dpdk/spdk_pid62916 00:19:08.203 Removing: /var/run/dpdk/spdk_pid64195 00:19:08.203 Removing: /var/run/dpdk/spdk_pid64418 00:19:08.203 Removing: /var/run/dpdk/spdk_pid64738 00:19:08.203 Removing: /var/run/dpdk/spdk_pid64848 00:19:08.203 Removing: /var/run/dpdk/spdk_pid64981 00:19:08.203 Removing: /var/run/dpdk/spdk_pid65009 00:19:08.203 Removing: /var/run/dpdk/spdk_pid65031 00:19:08.203 Removing: /var/run/dpdk/spdk_pid65064 00:19:08.203 Removing: /var/run/dpdk/spdk_pid65161 00:19:08.203 Removing: /var/run/dpdk/spdk_pid65290 00:19:08.203 Removing: /var/run/dpdk/spdk_pid65445 00:19:08.203 Removing: /var/run/dpdk/spdk_pid65520 00:19:08.203 Removing: /var/run/dpdk/spdk_pid65913 00:19:08.203 Removing: /var/run/dpdk/spdk_pid66271 
00:19:08.203 Removing: /var/run/dpdk/spdk_pid66273 00:19:08.203 Removing: /var/run/dpdk/spdk_pid68491 00:19:08.203 Removing: /var/run/dpdk/spdk_pid68494 00:19:08.203 Removing: /var/run/dpdk/spdk_pid68778 00:19:08.203 Removing: /var/run/dpdk/spdk_pid68792 00:19:08.203 Removing: /var/run/dpdk/spdk_pid68812 00:19:08.203 Removing: /var/run/dpdk/spdk_pid68841 00:19:08.203 Removing: /var/run/dpdk/spdk_pid68848 00:19:08.203 Removing: /var/run/dpdk/spdk_pid68936 00:19:08.203 Removing: /var/run/dpdk/spdk_pid68939 00:19:08.203 Removing: /var/run/dpdk/spdk_pid69047 00:19:08.203 Removing: /var/run/dpdk/spdk_pid69056 00:19:08.203 Removing: /var/run/dpdk/spdk_pid69164 00:19:08.203 Removing: /var/run/dpdk/spdk_pid69166 00:19:08.203 Removing: /var/run/dpdk/spdk_pid69567 00:19:08.203 Removing: /var/run/dpdk/spdk_pid69618 00:19:08.203 Removing: /var/run/dpdk/spdk_pid69725 00:19:08.203 Removing: /var/run/dpdk/spdk_pid69807 00:19:08.203 Removing: /var/run/dpdk/spdk_pid70122 00:19:08.203 Removing: /var/run/dpdk/spdk_pid70323 00:19:08.203 Removing: /var/run/dpdk/spdk_pid70711 00:19:08.203 Removing: /var/run/dpdk/spdk_pid71240 00:19:08.203 Removing: /var/run/dpdk/spdk_pid71689 00:19:08.203 Removing: /var/run/dpdk/spdk_pid71748 00:19:08.203 Removing: /var/run/dpdk/spdk_pid71795 00:19:08.203 Removing: /var/run/dpdk/spdk_pid71852 00:19:08.203 Removing: /var/run/dpdk/spdk_pid71961 00:19:08.204 Removing: /var/run/dpdk/spdk_pid72020 00:19:08.204 Removing: /var/run/dpdk/spdk_pid72086 00:19:08.204 Removing: /var/run/dpdk/spdk_pid72145 00:19:08.204 Removing: /var/run/dpdk/spdk_pid72467 00:19:08.204 Removing: /var/run/dpdk/spdk_pid73660 00:19:08.204 Removing: /var/run/dpdk/spdk_pid73805 00:19:08.204 Removing: /var/run/dpdk/spdk_pid74043 00:19:08.204 Removing: /var/run/dpdk/spdk_pid74608 00:19:08.204 Removing: /var/run/dpdk/spdk_pid74769 00:19:08.204 Removing: /var/run/dpdk/spdk_pid74928 00:19:08.204 Removing: /var/run/dpdk/spdk_pid75025 00:19:08.204 Removing: /var/run/dpdk/spdk_pid75195 00:19:08.204 Removing: /var/run/dpdk/spdk_pid75298 00:19:08.204 Removing: /var/run/dpdk/spdk_pid75967 00:19:08.204 Removing: /var/run/dpdk/spdk_pid76002 00:19:08.204 Removing: /var/run/dpdk/spdk_pid76037 00:19:08.204 Removing: /var/run/dpdk/spdk_pid76287 00:19:08.204 Removing: /var/run/dpdk/spdk_pid76317 00:19:08.204 Removing: /var/run/dpdk/spdk_pid76352 00:19:08.204 Clean 00:19:08.463 killing process with pid 48045 00:19:08.463 killing process with pid 48048 00:19:08.463 15:17:37 -- common/autotest_common.sh@1446 -- # return 0 00:19:08.463 15:17:37 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:19:08.463 15:17:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:08.463 15:17:37 -- common/autotest_common.sh@10 -- # set +x 00:19:08.463 15:17:37 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:19:08.463 15:17:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:08.463 15:17:37 -- common/autotest_common.sh@10 -- # set +x 00:19:08.463 15:17:37 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:08.463 15:17:37 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:08.463 15:17:37 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:08.463 15:17:37 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:19:08.463 15:17:37 -- spdk/autotest.sh@383 -- # hostname 00:19:08.463 15:17:37 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:08.722 geninfo: WARNING: invalid characters removed from testname! 00:19:35.282 15:18:00 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:35.282 15:18:04 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:37.818 15:18:06 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:40.351 15:18:09 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:42.918 15:18:11 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:45.452 15:18:14 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:47.986 15:18:16 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:19:47.986 15:18:17 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:19:47.986 15:18:17 -- common/autotest_common.sh@1690 -- $ lcov --version 00:19:47.986 15:18:17 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:19:47.986 15:18:17 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:19:47.986 15:18:17 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:19:47.986 15:18:17 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:19:47.986 15:18:17 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:19:47.986 15:18:17 -- scripts/common.sh@335 -- $ IFS=.-: 00:19:47.986 15:18:17 -- scripts/common.sh@335 -- $ read -ra ver1 00:19:47.986 15:18:17 -- scripts/common.sh@336 -- $ IFS=.-: 
00:19:47.986 15:18:17 -- scripts/common.sh@336 -- $ read -ra ver2 00:19:47.986 15:18:17 -- scripts/common.sh@337 -- $ local 'op=<' 00:19:47.986 15:18:17 -- scripts/common.sh@339 -- $ ver1_l=2 00:19:47.986 15:18:17 -- scripts/common.sh@340 -- $ ver2_l=1 00:19:47.986 15:18:17 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:19:47.986 15:18:17 -- scripts/common.sh@343 -- $ case "$op" in 00:19:47.986 15:18:17 -- scripts/common.sh@344 -- $ : 1 00:19:47.986 15:18:17 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:19:47.986 15:18:17 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:47.986 15:18:17 -- scripts/common.sh@364 -- $ decimal 1 00:19:47.986 15:18:17 -- scripts/common.sh@352 -- $ local d=1 00:19:47.986 15:18:17 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:19:47.986 15:18:17 -- scripts/common.sh@354 -- $ echo 1 00:19:47.986 15:18:17 -- scripts/common.sh@364 -- $ ver1[v]=1 00:19:47.986 15:18:17 -- scripts/common.sh@365 -- $ decimal 2 00:19:47.986 15:18:17 -- scripts/common.sh@352 -- $ local d=2 00:19:47.986 15:18:17 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:19:47.986 15:18:17 -- scripts/common.sh@354 -- $ echo 2 00:19:47.986 15:18:17 -- scripts/common.sh@365 -- $ ver2[v]=2 00:19:47.986 15:18:17 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:19:47.986 15:18:17 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:19:47.986 15:18:17 -- scripts/common.sh@367 -- $ return 0 00:19:47.986 15:18:17 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.986 15:18:17 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:19:47.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.986 --rc genhtml_branch_coverage=1 00:19:47.986 --rc genhtml_function_coverage=1 00:19:47.986 --rc genhtml_legend=1 00:19:47.986 --rc geninfo_all_blocks=1 00:19:47.986 --rc geninfo_unexecuted_blocks=1 00:19:47.986 00:19:47.986 ' 00:19:47.986 15:18:17 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:19:47.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.986 --rc genhtml_branch_coverage=1 00:19:47.986 --rc genhtml_function_coverage=1 00:19:47.986 --rc genhtml_legend=1 00:19:47.986 --rc geninfo_all_blocks=1 00:19:47.986 --rc geninfo_unexecuted_blocks=1 00:19:47.986 00:19:47.986 ' 00:19:47.986 15:18:17 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:19:47.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.986 --rc genhtml_branch_coverage=1 00:19:47.986 --rc genhtml_function_coverage=1 00:19:47.986 --rc genhtml_legend=1 00:19:47.986 --rc geninfo_all_blocks=1 00:19:47.986 --rc geninfo_unexecuted_blocks=1 00:19:47.986 00:19:47.986 ' 00:19:47.986 15:18:17 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:19:47.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.986 --rc genhtml_branch_coverage=1 00:19:47.986 --rc genhtml_function_coverage=1 00:19:47.986 --rc genhtml_legend=1 00:19:47.986 --rc geninfo_all_blocks=1 00:19:47.986 --rc geninfo_unexecuted_blocks=1 00:19:47.986 00:19:47.986 ' 00:19:47.986 15:18:17 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.986 15:18:17 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:19:47.986 15:18:17 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.986 15:18:17 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.986 15:18:17 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.986 15:18:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.986 15:18:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.986 15:18:17 -- paths/export.sh@5 -- $ export PATH 00:19:47.986 15:18:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.986 15:18:17 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:19:47.986 15:18:17 -- common/autobuild_common.sh@440 -- $ date +%s 00:19:47.986 15:18:17 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1730906297.XXXXXX 00:19:47.986 15:18:17 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1730906297.c8U6pQ 00:19:47.986 15:18:17 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:19:47.986 15:18:17 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:19:47.986 15:18:17 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:19:47.986 15:18:17 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:19:47.986 15:18:17 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:19:47.986 15:18:17 -- common/autobuild_common.sh@456 -- $ get_config_params 00:19:47.986 15:18:17 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:19:47.986 15:18:17 -- common/autotest_common.sh@10 -- $ set +x 00:19:47.986 15:18:17 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:19:47.986 15:18:17 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:19:47.986 15:18:17 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:19:47.986 15:18:17 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:19:47.986 15:18:17 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 
]] 00:19:47.986 15:18:17 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:19:47.986 15:18:17 -- spdk/autopackage.sh@19 -- $ timing_finish 00:19:47.986 15:18:17 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:19:47.986 15:18:17 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:19:47.986 15:18:17 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:47.986 15:18:17 -- spdk/autopackage.sh@20 -- $ exit 0 00:19:47.986 + [[ -n 5231 ]] 00:19:47.987 + sudo kill 5231 00:19:48.254 [Pipeline] } 00:19:48.271 [Pipeline] // timeout 00:19:48.276 [Pipeline] } 00:19:48.293 [Pipeline] // stage 00:19:48.298 [Pipeline] } 00:19:48.313 [Pipeline] // catchError 00:19:48.323 [Pipeline] stage 00:19:48.325 [Pipeline] { (Stop VM) 00:19:48.338 [Pipeline] sh 00:19:48.618 + vagrant halt 00:19:52.816 ==> default: Halting domain... 00:19:58.178 [Pipeline] sh 00:19:58.464 + vagrant destroy -f 00:20:01.752 ==> default: Removing domain... 00:20:01.764 [Pipeline] sh 00:20:02.045 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:20:02.054 [Pipeline] } 00:20:02.071 [Pipeline] // stage 00:20:02.076 [Pipeline] } 00:20:02.091 [Pipeline] // dir 00:20:02.099 [Pipeline] } 00:20:02.123 [Pipeline] // wrap 00:20:02.129 [Pipeline] } 00:20:02.141 [Pipeline] // catchError 00:20:02.153 [Pipeline] stage 00:20:02.166 [Pipeline] { (Epilogue) 00:20:02.223 [Pipeline] sh 00:20:02.503 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:07.787 [Pipeline] catchError 00:20:07.789 [Pipeline] { 00:20:07.802 [Pipeline] sh 00:20:08.083 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:08.083 Artifacts sizes are good 00:20:08.093 [Pipeline] } 00:20:08.107 [Pipeline] // catchError 00:20:08.118 [Pipeline] archiveArtifacts 00:20:08.126 Archiving artifacts 00:20:08.248 [Pipeline] cleanWs 00:20:08.290 [WS-CLEANUP] Deleting project workspace... 00:20:08.290 [WS-CLEANUP] Deferred wipeout is used... 00:20:08.296 [WS-CLEANUP] done 00:20:08.298 [Pipeline] } 00:20:08.313 [Pipeline] // stage 00:20:08.318 [Pipeline] } 00:20:08.332 [Pipeline] // node 00:20:08.336 [Pipeline] End of Pipeline 00:20:08.373 Finished: SUCCESS
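For reference, the kernel_target_abort stage traced earlier in this log configures an in-kernel NVMe/TCP target through configfs and then sweeps the bundled abort example over queue depths 4, 24 and 64. The sketch below restates those steps in plain bash. The configfs attribute file names follow the standard nvmet layout and are stated here as an assumption, since the xtrace above does not show redirection targets; the device, addresses and abort flags mirror the log.

# Kernel NVMe/TCP target setup, as performed by configure_kernel_target above.
modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/kernel_target
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
# The trace also writes "SPDK-kernel_target" into a subsystem attribute
# (serial/model string); the exact target file is not visible in the xtrace.
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme1n3 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

# Abort workload, identical for the SPDK and kernel stages apart from subnqn:
for qd in 4 24 64; do
  /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
done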